Test Report: Docker_Linux_containerd_arm64 22127

087e852008767f332c662fe76eaa150bb5f9e6c8:2025-12-13:42757

Failed tests (34/417)

Order  Failed test  Duration (s)
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 503.24
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 368.77
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 2.28
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 2.27
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 2.31
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 736.22
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 2.19
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 0.06
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 1.73
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 3.19
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 2.51
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 241.67
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 1.44
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.5
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 0.12
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 121.7
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.06
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.27
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.27
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.26
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.28
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.26
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 2.55
358 TestKubernetesUpgrade 800.64
413 TestStartStop/group/no-preload/serial/FirstStart 514.7
437 TestStartStop/group/newest-cni/serial/FirstStart 502.28
438 TestStartStop/group/no-preload/serial/DeployApp 3.09
439 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 85.31
442 TestStartStop/group/no-preload/serial/SecondStart 369.99
444 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 104.06
447 TestStartStop/group/newest-cni/serial/SecondStart 374.01
448 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.74
452 TestStartStop/group/newest-cni/serial/Pause 9.68
467 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 284.49
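
Each failure above can usually be reproduced in isolation with Go's test filter. A minimal sketch, assuming a minikube source checkout and the prebuilt out/minikube-linux-arm64 binary this job uses (the --minikube-start-args flag follows the suite's documented convention; treat the exact invocation as an assumption, not something this report confirms):

    # hypothetical local repro of the first failed group in this report
    go test -v -timeout 90m ./test/integration \
      -run "TestFunctionalNewestKubernetes" \
      --minikube-start-args="--driver=docker --container-runtime=containerd"
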
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (503.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-652709 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1213 10:22:56.110881  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:25:12.241115  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:25:39.953835  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:26:48.083780  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:26:48.090181  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:26:48.101659  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:26:48.123210  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:26:48.164688  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:26:48.246280  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:26:48.407806  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:26:48.729538  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:26:49.371660  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:26:50.653380  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:26:53.218792  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:26:58.340647  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:27:08.582159  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:27:29.063620  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:28:10.025144  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:29:31.950182  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:30:12.241078  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-652709 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m21.741312291s)

-- stdout --
	* [functional-652709] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-652709" primary control-plane node in "functional-652709" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Found network options:
	  - HTTP_PROXY=localhost:46303
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:46303 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-652709 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-652709 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001296773s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000358804s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000358804s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
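
The stderr block above ends with minikube's own suggestion (kubelet.cgroup-driver=systemd), and the kubeadm preflight warnings flag deprecated cgroups v1 support on this 5.15 kernel plus the usual journalctl hint. A hedged triage sketch using the same binary and profile as the failing run (the commands mirror what the log itself recommends; nothing here is confirmed to resolve the failure):

    # read the kubelet journal inside the node, per the kubeadm hint
    out/minikube-linux-arm64 -p functional-652709 ssh "sudo journalctl -xeu kubelet"
    # retry the start with the cgroup driver the error message suggests
    out/minikube-linux-arm64 start -p functional-652709 --driver=docker \
      --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 \
      --extra-config=kubelet.cgroup-driver=systemd
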
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-652709 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-652709
helpers_test.go:244: (dbg) docker inspect functional-652709:

-- stdout --
	[
	    {
	        "Id": "0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f",
	        "Created": "2025-12-13T10:22:44.366993781Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 347931,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:22:44.437030763Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/hostname",
	        "HostsPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/hosts",
	        "LogPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f-json.log",
	        "Name": "/functional-652709",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-652709:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-652709",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f",
	                "LowerDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095-init/diff:/var/lib/docker/overlay2/5d0ca86320f2c06765995d644a73af30cd6ddb36f5c1a2a6b1ebb24af53cdabd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/merged",
	                "UpperDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/diff",
	                "WorkDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-652709",
	                "Source": "/var/lib/docker/volumes/functional-652709/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-652709",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-652709",
	                "name.minikube.sigs.k8s.io": "functional-652709",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "52e527b5bd789a02eb7efb651200033ed4929e5fc7545e9df042d3f777cc9782",
	            "SandboxKey": "/var/run/docker/netns/52e527b5bd78",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-652709": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:23:08:9e:cb:13",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "344f2b940117dadb28d1ef1328f911c0446307288fdfafebfe59f38e473f79cb",
	                    "EndpointID": "8954f96e5987202be5715e7023384fe862744778b2520bccba28c57814f0980f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-652709",
	                        "0f6101071ca2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
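
The docker inspect dump above shows the container itself is fine (State.Status "running", all ports published on 127.0.0.1, IP 192.168.49.2 on the functional-652709 network), which localizes the failure inside the node rather than at the Docker layer. When triaging by hand, the same fields can be pulled without the full JSON via docker's Go-template filter (a sketch; the field paths match the dump above):

    # container state only
    docker inspect -f '{{.State.Status}} started={{.State.StartedAt}}' functional-652709
    # published ports and the node IP
    docker inspect -f '{{json .NetworkSettings.Ports}}' functional-652709
    docker inspect -f '{{(index .NetworkSettings.Networks "functional-652709").IPAddress}}' functional-652709
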
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-652709 -n functional-652709
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-652709 -n functional-652709: exit status 6 (350.281966ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1213 10:31:01.479874  353104 status.go:458] kubeconfig endpoint: get endpoint: "functional-652709" does not appear in /home/jenkins/minikube-integration/22127-307042/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
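
The status error above is the stale-context case the stdout warning describes: "functional-652709" never made it into the kubeconfig because the start failed, and the report's own hint is minikube update-context. A minimal sketch with the same binary and profile (whether this clears the exit status 6 after a failed start is not confirmed):

    # repoint the kubeconfig at the current cluster, per the warning above
    out/minikube-linux-arm64 -p functional-652709 update-context
    out/minikube-linux-arm64 status -p functional-652709
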
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons         │ addons-672850 addons disable ingress --alsologtostderr -v=1                                                                                             │ addons-672850     │ jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
	│ stop           │ -p addons-672850                                                                                                                                        │ addons-672850     │ jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:18 UTC │
	│ addons         │ enable dashboard -p addons-672850                                                                                                                       │ addons-672850     │ jenkins │ v1.37.0 │ 13 Dec 25 10:18 UTC │ 13 Dec 25 10:18 UTC │
	│ addons         │ disable dashboard -p addons-672850                                                                                                                      │ addons-672850     │ jenkins │ v1.37.0 │ 13 Dec 25 10:18 UTC │ 13 Dec 25 10:18 UTC │
	│ addons         │ disable gvisor -p addons-672850                                                                                                                         │ addons-672850     │ jenkins │ v1.37.0 │ 13 Dec 25 10:18 UTC │ 13 Dec 25 10:18 UTC │
	│ delete         │ -p addons-672850                                                                                                                                        │ addons-672850     │ jenkins │ v1.37.0 │ 13 Dec 25 10:18 UTC │ 13 Dec 25 10:18 UTC │
	│ start          │ -p dockerenv-403574 --driver=docker  --container-runtime=containerd                                                                                     │ dockerenv-403574  │ jenkins │ v1.37.0 │ 13 Dec 25 10:18 UTC │ 13 Dec 25 10:18 UTC │
	│ docker-env     │ --ssh-host --ssh-add -p dockerenv-403574                                                                                                                │ dockerenv-403574  │ jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
	│ delete         │ -p dockerenv-403574                                                                                                                                     │ dockerenv-403574  │ jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
	│ start          │ -p nospam-462625 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-462625 --driver=docker  --container-runtime=containerd                           │ nospam-462625     │ jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
	│ start          │ nospam-462625 --log_dir /tmp/nospam-462625 start --dry-run                                                                                              │ nospam-462625     │ jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │                     │
	│ start          │ nospam-462625 --log_dir /tmp/nospam-462625 start --dry-run                                                                                              │ nospam-462625     │ jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │                     │
	│ start          │ nospam-462625 --log_dir /tmp/nospam-462625 start --dry-run                                                                                              │ nospam-462625     │ jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │                     │
	│ pause          │ nospam-462625 --log_dir /tmp/nospam-462625 pause                                                                                                        │ nospam-462625     │ jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
	│ pause          │ nospam-462625 --log_dir /tmp/nospam-462625 pause                                                                                                        │ nospam-462625     │ jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
	│ update-context │ functional-319494 update-context --alsologtostderr -v=2                                                                                                 │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ image          │ functional-319494 image ls --format short --alsologtostderr                                                                                             │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ image          │ functional-319494 image ls --format yaml --alsologtostderr                                                                                              │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ ssh            │ functional-319494 ssh pgrep buildkitd                                                                                                                   │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │                     │
	│ image          │ functional-319494 image ls --format json --alsologtostderr                                                                                              │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ image          │ functional-319494 image build -t localhost/my-image:functional-319494 testdata/build --alsologtostderr                                                  │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ image          │ functional-319494 image ls --format table --alsologtostderr                                                                                             │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ image          │ functional-319494 image ls                                                                                                                              │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ delete         │ -p functional-319494                                                                                                                                    │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ start          │ -p functional-652709 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:22:39
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:22:39.421960  347534 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:22:39.422061  347534 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:22:39.422065  347534 out.go:374] Setting ErrFile to fd 2...
	I1213 10:22:39.422069  347534 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:22:39.422314  347534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 10:22:39.422757  347534 out.go:368] Setting JSON to false
	I1213 10:22:39.423550  347534 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":11112,"bootTime":1765610247,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 10:22:39.423602  347534 start.go:143] virtualization:  
	I1213 10:22:39.427949  347534 out.go:179] * [functional-652709] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:22:39.432335  347534 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 10:22:39.432462  347534 notify.go:221] Checking for updates...
	I1213 10:22:39.439559  347534 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:22:39.442888  347534 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:22:39.446129  347534 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 10:22:39.449270  347534 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:22:39.452443  347534 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:22:39.455730  347534 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:22:39.489393  347534 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:22:39.489505  347534 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:22:39.543781  347534 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-13 10:22:39.533679294 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:22:39.543875  347534 docker.go:319] overlay module found
	I1213 10:22:39.547092  347534 out.go:179] * Using the docker driver based on user configuration
	I1213 10:22:39.550078  347534 start.go:309] selected driver: docker
	I1213 10:22:39.550085  347534 start.go:927] validating driver "docker" against <nil>
	I1213 10:22:39.550097  347534 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:22:39.550915  347534 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:22:39.622622  347534 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-13 10:22:39.610875453 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:22:39.622857  347534 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 10:22:39.623165  347534 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:22:39.626177  347534 out.go:179] * Using Docker driver with root privileges
	I1213 10:22:39.629190  347534 cni.go:84] Creating CNI manager for ""
	I1213 10:22:39.629268  347534 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:22:39.629279  347534 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 10:22:39.629390  347534 start.go:353] cluster config:
	{Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:22:39.632752  347534 out.go:179] * Starting "functional-652709" primary control-plane node in "functional-652709" cluster
	I1213 10:22:39.635589  347534 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 10:22:39.638791  347534 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:22:39.641667  347534 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 10:22:39.641710  347534 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 10:22:39.641720  347534 cache.go:65] Caching tarball of preloaded images
	I1213 10:22:39.641744  347534 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:22:39.641829  347534 preload.go:238] Found /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 10:22:39.641839  347534 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 10:22:39.642240  347534 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/config.json ...
	I1213 10:22:39.642267  347534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/config.json: {Name:mkdd6dba0d583de35ce43823020b0dfb44a1a137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:22:39.662516  347534 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:22:39.662527  347534 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:22:39.662545  347534 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:22:39.662579  347534 start.go:360] acquireMachinesLock for functional-652709: {Name:mk6e8c40fbbb5af0bb2468340fd710875030300d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:22:39.662677  347534 start.go:364] duration metric: took 84.768µs to acquireMachinesLock for "functional-652709"
	I1213 10:22:39.662727  347534 start.go:93] Provisioning new machine with config: &{Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
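
The Provisioning entry repeats the full cluster config and ends with the node about to be created, still without a name or IP. The node portion reduces to a small tuple; a hypothetical Go struct mirroring just the dumped fields:

    package main

    import "fmt"

    // Node mirrors the fields shown in the log's node dump; the field
    // names are taken from the printed keys, not from minikube's source.
    type Node struct {
        Name              string
        IP                string
        Port              int
        KubernetesVersion string
        ContainerRuntime  string
        ControlPlane      bool
        Worker            bool
    }

    func main() {
        n := Node{
            Port:              8441,
            KubernetesVersion: "v1.35.0-beta.0",
            ContainerRuntime:  "containerd",
            ControlPlane:      true,
            Worker:            true,
        }
        fmt.Printf("%+v\n", n) // Name and IP stay empty until the container exists
    }
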
	I1213 10:22:39.662836  347534 start.go:125] createHost starting for "" (driver="docker")
	I1213 10:22:39.666292  347534 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1213 10:22:39.666632  347534 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:46303 to docker env.
	I1213 10:22:39.666659  347534 start.go:159] libmachine.API.Create for "functional-652709" (driver="docker")
	I1213 10:22:39.666715  347534 client.go:173] LocalClient.Create starting
	I1213 10:22:39.666828  347534 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem
	I1213 10:22:39.666870  347534 main.go:143] libmachine: Decoding PEM data...
	I1213 10:22:39.666898  347534 main.go:143] libmachine: Parsing certificate...
	I1213 10:22:39.666965  347534 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem
	I1213 10:22:39.666984  347534 main.go:143] libmachine: Decoding PEM data...
	I1213 10:22:39.666994  347534 main.go:143] libmachine: Parsing certificate...
	I1213 10:22:39.667417  347534 cli_runner.go:164] Run: docker network inspect functional-652709 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 10:22:39.684615  347534 cli_runner.go:211] docker network inspect functional-652709 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 10:22:39.684708  347534 network_create.go:284] running [docker network inspect functional-652709] to gather additional debugging logs...
	I1213 10:22:39.684725  347534 cli_runner.go:164] Run: docker network inspect functional-652709
	W1213 10:22:39.700381  347534 cli_runner.go:211] docker network inspect functional-652709 returned with exit code 1
	I1213 10:22:39.700411  347534 network_create.go:287] error running [docker network inspect functional-652709]: docker network inspect functional-652709: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-652709 not found
	I1213 10:22:39.700425  347534 network_create.go:289] output of [docker network inspect functional-652709]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-652709 not found
	
	** /stderr **
	I1213 10:22:39.700522  347534 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:22:39.717552  347534 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400194d540}
	I1213 10:22:39.717582  347534 network_create.go:124] attempt to create docker network functional-652709 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1213 10:22:39.717644  347534 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-652709 functional-652709
	I1213 10:22:39.786135  347534 network_create.go:108] docker network functional-652709 192.168.49.0/24 created
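
Network creation above is plain docker CLI: inspect fails with "not found", the first free private /24 (192.168.49.0/24) is chosen, and a bridge network is created with a fixed gateway, MTU 1500, and minikube's labels. A standalone repro sketch with the flags copied from the log (the os/exec wrapper is illustrative, not minikube's code path):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func createMinikubeNetwork(name, subnet, gateway string) error {
        // "docker network inspect" exits non-zero when the network is
        // missing, which is the cue to create it (mirrors the log above).
        if exec.Command("docker", "network", "inspect", name).Run() == nil {
            return nil // already exists
        }
        out, err := exec.Command("docker", "network", "create",
            "--driver=bridge",
            "--subnet="+subnet,
            "--gateway="+gateway,
            "-o", "--ip-masq", "-o", "--icc", // option names copied verbatim from the log
            "-o", "com.docker.network.driver.mtu=1500",
            "--label=created_by.minikube.sigs.k8s.io=true",
            "--label=name.minikube.sigs.k8s.io="+name,
            name).CombinedOutput()
        if err != nil {
            return fmt.Errorf("network create: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        fmt.Println(createMinikubeNetwork("functional-652709", "192.168.49.0/24", "192.168.49.1"))
    }
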
	I1213 10:22:39.786166  347534 kic.go:121] calculated static IP "192.168.49.2" for the "functional-652709" container
	I1213 10:22:39.786258  347534 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 10:22:39.801500  347534 cli_runner.go:164] Run: docker volume create functional-652709 --label name.minikube.sigs.k8s.io=functional-652709 --label created_by.minikube.sigs.k8s.io=true
	I1213 10:22:39.819680  347534 oci.go:103] Successfully created a docker volume functional-652709
	I1213 10:22:39.819773  347534 cli_runner.go:164] Run: docker run --rm --name functional-652709-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-652709 --entrypoint /usr/bin/test -v functional-652709:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 10:22:40.392339  347534 oci.go:107] Successfully prepared a docker volume functional-652709
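
The throwaway preload-sidecar run relies on Docker's first-mount behavior: an empty named volume mounted at /var is populated from the image's /var, and the /usr/bin/test -d /var/lib entrypoint is merely a cheap command that exits 0 once that copy has happened. A minimal equivalent, assuming the image reference from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // kicbase image reference copied from the log above.
    const kicbase = "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f"

    func main() {
        // Mounting the empty named volume at /var makes Docker copy the
        // image's /var into it; `test -d /var/lib` exits 0 once it is there.
        err := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/test",
            "-v", "functional-652709:/var",
            kicbase, "-d", "/var/lib").Run()
        fmt.Println("volume primed:", err == nil)
    }
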
	I1213 10:22:40.392403  347534 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 10:22:40.392410  347534 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 10:22:40.392484  347534 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-652709:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 10:22:44.282605  347534 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-652709:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.890086063s)
	I1213 10:22:44.282627  347534 kic.go:203] duration metric: took 3.890213564s to extract preloaded images to volume ...
	W1213 10:22:44.282821  347534 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 10:22:44.282918  347534 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 10:22:44.352211  347534 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-652709 --name functional-652709 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-652709 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-652709 --network functional-652709 --ip 192.168.49.2 --volume functional-652709:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 10:22:44.665912  347534 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Running}}
	I1213 10:22:44.689544  347534 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
	I1213 10:22:44.718557  347534 cli_runner.go:164] Run: docker exec functional-652709 stat /var/lib/dpkg/alternatives/iptables
	I1213 10:22:44.769260  347534 oci.go:144] the created container "functional-652709" has a running status.
	I1213 10:22:44.769280  347534 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa...
	I1213 10:22:44.807632  347534 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 10:22:44.830872  347534 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
	I1213 10:22:44.853184  347534 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 10:22:44.853196  347534 kic_runner.go:114] Args: [docker exec --privileged functional-652709 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 10:22:44.906572  347534 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
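
SSH bootstrap for the kic container, per the lines above: generate an RSA keypair on the host, copy the public half in as /home/docker/.ssh/authorized_keys (381 bytes here), and chown it to the docker user. A sketch of producing such an authorized_keys line with golang.org/x/crypto/ssh (not minikube's own helper):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Generate the host-side keypair...
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        // ...and render the public half in authorized_keys format, the
        // same shape as the file copied into the container above.
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s", ssh.MarshalAuthorizedKey(pub))
    }
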
	I1213 10:22:44.928631  347534 machine.go:94] provisionDockerMachine start ...
	I1213 10:22:44.928734  347534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:22:44.952127  347534 main.go:143] libmachine: Using SSH client type: native
	I1213 10:22:44.952486  347534 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1213 10:22:44.952503  347534 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:22:44.953140  347534 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 10:22:48.106435  347534 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-652709
	
	I1213 10:22:48.106459  347534 ubuntu.go:182] provisioning hostname "functional-652709"
	I1213 10:22:48.106528  347534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:22:48.124661  347534 main.go:143] libmachine: Using SSH client type: native
	I1213 10:22:48.125032  347534 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1213 10:22:48.125042  347534 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-652709 && echo "functional-652709" | sudo tee /etc/hostname
	I1213 10:22:48.284292  347534 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-652709
	
	I1213 10:22:48.284364  347534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:22:48.301930  347534 main.go:143] libmachine: Using SSH client type: native
	I1213 10:22:48.302240  347534 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1213 10:22:48.302254  347534 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-652709' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-652709/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-652709' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:22:48.451049  347534 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:22:48.451066  347534 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-307042/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-307042/.minikube}
	I1213 10:22:48.451095  347534 ubuntu.go:190] setting up certificates
	I1213 10:22:48.451103  347534 provision.go:84] configureAuth start
	I1213 10:22:48.451161  347534 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-652709
	I1213 10:22:48.469143  347534 provision.go:143] copyHostCerts
	I1213 10:22:48.469210  347534 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem, removing ...
	I1213 10:22:48.469218  347534 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem
	I1213 10:22:48.469294  347534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem (1082 bytes)
	I1213 10:22:48.469390  347534 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem, removing ...
	I1213 10:22:48.469394  347534 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem
	I1213 10:22:48.469422  347534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem (1123 bytes)
	I1213 10:22:48.469472  347534 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem, removing ...
	I1213 10:22:48.469475  347534 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem
	I1213 10:22:48.469498  347534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem (1675 bytes)
	I1213 10:22:48.469541  347534 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem org=jenkins.functional-652709 san=[127.0.0.1 192.168.49.2 functional-652709 localhost minikube]
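
configureAuth generates a server certificate whose SANs match the log exactly: 127.0.0.1, 192.168.49.2, functional-652709, localhost, minikube. A self-contained crypto/x509 sketch building that SAN set (self-signed for brevity; the logged flow signs with the machine CA, ca.pem/ca-key.pem):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.functional-652709"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            // SANs exactly as logged: san=[127.0.0.1 192.168.49.2 functional-652709 localhost minikube]
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
            DNSNames:    []string{"functional-652709", "localhost", "minikube"},
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // Template doubles as parent, i.e. self-signed; substitute the CA
        // cert and key as parent/signer to match the real flow.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
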
	I1213 10:22:48.570422  347534 provision.go:177] copyRemoteCerts
	I1213 10:22:48.570474  347534 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:22:48.570512  347534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:22:48.587198  347534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:22:48.690421  347534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:22:48.707570  347534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:22:48.725248  347534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:22:48.742477  347534 provision.go:87] duration metric: took 291.350942ms to configureAuth
	I1213 10:22:48.742495  347534 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:22:48.742786  347534 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:22:48.742795  347534 machine.go:97] duration metric: took 3.8141523s to provisionDockerMachine
	I1213 10:22:48.742801  347534 client.go:176] duration metric: took 9.076081213s to LocalClient.Create
	I1213 10:22:48.742825  347534 start.go:167] duration metric: took 9.076167573s to libmachine.API.Create "functional-652709"
	I1213 10:22:48.742832  347534 start.go:293] postStartSetup for "functional-652709" (driver="docker")
	I1213 10:22:48.742841  347534 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:22:48.742897  347534 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:22:48.742977  347534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:22:48.760384  347534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:22:48.866658  347534 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:22:48.870023  347534 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:22:48.870040  347534 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:22:48.870050  347534 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/addons for local assets ...
	I1213 10:22:48.870112  347534 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/files for local assets ...
	I1213 10:22:48.870201  347534 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> 3089152.pem in /etc/ssl/certs
	I1213 10:22:48.870281  347534 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/test/nested/copy/308915/hosts -> hosts in /etc/test/nested/copy/308915
	I1213 10:22:48.870329  347534 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/308915
	I1213 10:22:48.877987  347534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 10:22:48.895388  347534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/test/nested/copy/308915/hosts --> /etc/test/nested/copy/308915/hosts (40 bytes)
	I1213 10:22:48.913143  347534 start.go:296] duration metric: took 170.296932ms for postStartSetup
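
postStartSetup's file sync mirrors everything under .minikube/files into the node at the same relative path, which is how 3089152.pem and the nested hosts file above got their destinations. A sketch of the path mapping, run locally:

    package main

    import (
        "fmt"
        "io/fs"
        "path/filepath"
        "strings"
    )

    func main() {
        // Root scanned in the log; anything beneath it lands at the same
        // relative path inside the node (e.g. .../files/etc/ssl/certs/3089152.pem
        // becomes /etc/ssl/certs/3089152.pem).
        root := "/home/jenkins/minikube-integration/22127-307042/.minikube/files"
        filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return err
            }
            fmt.Println(p, "->", strings.TrimPrefix(p, root))
            return nil
        })
    }
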
	I1213 10:22:48.913501  347534 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-652709
	I1213 10:22:48.931292  347534 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/config.json ...
	I1213 10:22:48.931575  347534 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:22:48.931614  347534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:22:48.948293  347534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:22:49.051913  347534 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:22:49.056599  347534 start.go:128] duration metric: took 9.393749657s to createHost
	I1213 10:22:49.056613  347534 start.go:83] releasing machines lock for "functional-652709", held for 9.393929393s
	I1213 10:22:49.056687  347534 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-652709
	I1213 10:22:49.077337  347534 out.go:179] * Found network options:
	I1213 10:22:49.080202  347534 out.go:179]   - HTTP_PROXY=localhost:46303
	W1213 10:22:49.083133  347534 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1213 10:22:49.086024  347534 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1213 10:22:49.088946  347534 ssh_runner.go:195] Run: cat /version.json
	I1213 10:22:49.089012  347534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:22:49.089013  347534 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:22:49.089069  347534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:22:49.117533  347534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:22:49.124293  347534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:22:49.222429  347534 ssh_runner.go:195] Run: systemctl --version
	I1213 10:22:49.315468  347534 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:22:49.319814  347534 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:22:49.319884  347534 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:22:49.345944  347534 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 10:22:49.345957  347534 start.go:496] detecting cgroup driver to use...
	I1213 10:22:49.345987  347534 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:22:49.346035  347534 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 10:22:49.360971  347534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 10:22:49.373747  347534 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:22:49.373807  347534 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:22:49.391534  347534 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:22:49.410094  347534 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:22:49.528445  347534 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:22:49.657591  347534 docker.go:234] disabling docker service ...
	I1213 10:22:49.657663  347534 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:22:49.682596  347534 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:22:49.695994  347534 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:22:49.811277  347534 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:22:49.922872  347534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:22:49.936025  347534 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:22:49.949772  347534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 10:22:49.959041  347534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 10:22:49.968328  347534 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 10:22:49.968395  347534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 10:22:49.977747  347534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:22:49.987321  347534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 10:22:49.997250  347534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:22:50.019551  347534 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:22:50.028873  347534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 10:22:50.039465  347534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 10:22:50.049007  347534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 10:22:50.058658  347534 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:22:50.066833  347534 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:22:50.074883  347534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:22:50.207353  347534 ssh_runner.go:195] Run: sudo systemctl restart containerd
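
The block above rewrites /etc/containerd/config.toml through a series of in-place sed edits (sandbox image, cgroup driver, runc v2 runtime, CNI conf dir, unprivileged ports), then daemon-reloads and restarts containerd. One of those edits, the SystemdCgroup toggle matching the "cgroupfs" driver detected on the host, translated to a Go regexp over an in-memory sample:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // In-memory stand-in for /etc/containerd/config.toml.
        sample := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true`
        // Same pattern as the logged sed: force SystemdCgroup = false,
        // preserving the original indentation via the capture group.
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        fmt.Println(re.ReplaceAllString(sample, "${1}SystemdCgroup = false"))
    }
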
	I1213 10:22:50.344034  347534 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 10:22:50.344107  347534 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 10:22:50.348005  347534 start.go:564] Will wait 60s for crictl version
	I1213 10:22:50.348057  347534 ssh_runner.go:195] Run: which crictl
	I1213 10:22:50.351497  347534 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:22:50.374963  347534 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 10:22:50.375051  347534 ssh_runner.go:195] Run: containerd --version
	I1213 10:22:50.395858  347534 ssh_runner.go:195] Run: containerd --version
	I1213 10:22:50.421963  347534 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 10:22:50.425068  347534 cli_runner.go:164] Run: docker network inspect functional-652709 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:22:50.442871  347534 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 10:22:50.446595  347534 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
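
The /etc/hosts rewrite above is an idempotent replace-then-append: strip any line ending in a tab plus host.minikube.internal, append exactly one fresh entry, and copy the temp file back. The same shape in Go, operating on a string for illustration only:

    package main

    import (
        "fmt"
        "strings"
    )

    // ensureHostsLine drops any existing entry for name, then appends
    // exactly one "ip<TAB>name" line, so repeated runs stay idempotent.
    // The logged flow does this to /etc/hosts via /tmp/h.$$ and sudo cp.
    func ensureHostsLine(hosts, ip, name string) string {
        var keep []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                keep = append(keep, line)
            }
        }
        keep = append(keep, ip+"\t"+name)
        return strings.Join(keep, "\n") + "\n"
    }

    func main() {
        fmt.Print(ensureHostsLine("127.0.0.1\tlocalhost\n", "192.168.49.1", "host.minikube.internal"))
    }
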
	I1213 10:22:50.457009  347534 kubeadm.go:884] updating cluster {Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:22:50.457115  347534 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 10:22:50.457188  347534 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:22:50.482070  347534 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 10:22:50.482082  347534 containerd.go:534] Images already preloaded, skipping extraction
	I1213 10:22:50.482140  347534 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:22:50.509294  347534 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 10:22:50.509306  347534 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:22:50.509312  347534 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1213 10:22:50.509401  347534 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-652709 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 10:22:50.509462  347534 ssh_runner.go:195] Run: sudo crictl info
	I1213 10:22:50.534821  347534 cni.go:84] Creating CNI manager for ""
	I1213 10:22:50.534833  347534 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:22:50.534848  347534 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:22:50.534870  347534 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-652709 NodeName:functional-652709 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:22:50.534994  347534 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-652709"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 10:22:50.535059  347534 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:22:50.542985  347534 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:22:50.543043  347534 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:22:50.550654  347534 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 10:22:50.563543  347534 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:22:50.577040  347534 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1213 10:22:50.590534  347534 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:22:50.594062  347534 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:22:50.603707  347534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:22:50.719523  347534 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:22:50.736125  347534 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709 for IP: 192.168.49.2
	I1213 10:22:50.736136  347534 certs.go:195] generating shared ca certs ...
	I1213 10:22:50.736150  347534 certs.go:227] acquiring lock for ca certs: {Name:mkca84a85b1b664dba08ef567e84d493256f825e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:22:50.736314  347534 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key
	I1213 10:22:50.736357  347534 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key
	I1213 10:22:50.736364  347534 certs.go:257] generating profile certs ...
	I1213 10:22:50.736418  347534 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.key
	I1213 10:22:50.736427  347534 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt with IP's: []
	I1213 10:22:51.182304  347534 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt ...
	I1213 10:22:51.182320  347534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt: {Name:mke1b6d7e6424580fd39d75cde2a9ed5cfcf2718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:22:51.182527  347534 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.key ...
	I1213 10:22:51.182533  347534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.key: {Name:mkaec4010f0e41820acbef473dc41ecc4824f0f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:22:51.182630  347534 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.key.86e7afd1
	I1213 10:22:51.182641  347534 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.crt.86e7afd1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1213 10:22:51.355751  347534 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.crt.86e7afd1 ...
	I1213 10:22:51.355766  347534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.crt.86e7afd1: {Name:mk625ae06d684270f1a880352d905723e4d9cae1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:22:51.355944  347534 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.key.86e7afd1 ...
	I1213 10:22:51.355955  347534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.key.86e7afd1: {Name:mk46d2406f9ca3b99d70727d83a0decbfedb1fe4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:22:51.356035  347534 certs.go:382] copying /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.crt.86e7afd1 -> /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.crt
	I1213 10:22:51.356141  347534 certs.go:386] copying /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.key.86e7afd1 -> /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.key
	I1213 10:22:51.356232  347534 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.key
	I1213 10:22:51.356243  347534 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.crt with IP's: []
	I1213 10:22:51.538627  347534 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.crt ...
	I1213 10:22:51.538641  347534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.crt: {Name:mkaec43616cd7e9caf744292336b5f7c8de54b91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:22:51.538831  347534 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.key ...
	I1213 10:22:51.538840  347534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.key: {Name:mk250d4e62be0df79e0e99621e601bc0253543ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:22:51.539033  347534 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem (1338 bytes)
	W1213 10:22:51.539074  347534 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915_empty.pem, impossibly tiny 0 bytes
	I1213 10:22:51.539081  347534 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:22:51.539113  347534 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:22:51.539138  347534 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:22:51.539162  347534 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem (1675 bytes)
	I1213 10:22:51.539208  347534 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 10:22:51.539807  347534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:22:51.558901  347534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:22:51.576898  347534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:22:51.594740  347534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:22:51.613783  347534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:22:51.632331  347534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 10:22:51.649950  347534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:22:51.667691  347534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:22:51.685767  347534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem --> /usr/share/ca-certificates/308915.pem (1338 bytes)
	I1213 10:22:51.703954  347534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /usr/share/ca-certificates/3089152.pem (1708 bytes)
	I1213 10:22:51.722146  347534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:22:51.739724  347534 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:22:51.752504  347534 ssh_runner.go:195] Run: openssl version
	I1213 10:22:51.758624  347534 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/308915.pem
	I1213 10:22:51.766277  347534 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/308915.pem /etc/ssl/certs/308915.pem
	I1213 10:22:51.773619  347534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/308915.pem
	I1213 10:22:51.777349  347534 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:22 /usr/share/ca-certificates/308915.pem
	I1213 10:22:51.777407  347534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308915.pem
	I1213 10:22:51.818655  347534 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:22:51.826274  347534 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/308915.pem /etc/ssl/certs/51391683.0
	I1213 10:22:51.833740  347534 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3089152.pem
	I1213 10:22:51.841289  347534 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3089152.pem /etc/ssl/certs/3089152.pem
	I1213 10:22:51.848802  347534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3089152.pem
	I1213 10:22:51.852407  347534 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:22 /usr/share/ca-certificates/3089152.pem
	I1213 10:22:51.852462  347534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3089152.pem
	I1213 10:22:51.893814  347534 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:22:51.901368  347534 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3089152.pem /etc/ssl/certs/3ec20f2e.0
	I1213 10:22:51.908943  347534 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:22:51.916542  347534 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:22:51.924152  347534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:22:51.927806  347534 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:22:51.927863  347534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:22:51.968628  347534 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:22:51.976324  347534 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
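
Each certificate install above follows one pattern: hash it with openssl x509 -hash, then symlink /etc/ssl/certs/<hash>.0 at the cert so OpenSSL-style trust stores can find it (minikubeCA.pem maps to b5213941.0 here). A repro sketch:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // installCA repeats the per-certificate pattern from the log: ask
    // openssl for the subject hash, then symlink /etc/ssl/certs/<hash>.0
    // at the cert. Writing into /etc/ssl/certs requires root.
    func installCA(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        os.Remove(link) // mimic ln -fs: replace any stale link
        return os.Symlink(certPath, link)
    }

    func main() {
        // In the log this maps minikubeCA.pem to b5213941.0.
        fmt.Println(installCA("/usr/share/ca-certificates/minikubeCA.pem"))
    }
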
	I1213 10:22:51.984190  347534 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:22:51.987955  347534 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 10:22:51.988010  347534 kubeadm.go:401] StartCluster: {Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:22:51.988103  347534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 10:22:51.988161  347534 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:22:52.023448  347534 cri.go:89] found id: ""
	I1213 10:22:52.023514  347534 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:22:52.031813  347534 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:22:52.040029  347534 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:22:52.040106  347534 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:22:52.048323  347534 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:22:52.048341  347534 kubeadm.go:158] found existing configuration files:
	
	I1213 10:22:52.048403  347534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:22:52.056328  347534 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:22:52.056393  347534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:22:52.064100  347534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:22:52.072005  347534 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:22:52.072061  347534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:22:52.079997  347534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:22:52.088156  347534 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:22:52.088216  347534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:22:52.096056  347534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:22:52.104211  347534 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:22:52.104272  347534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:22:52.112247  347534 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:22:52.152816  347534 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:22:52.152866  347534 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:22:52.255492  347534 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:22:52.255557  347534 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:22:52.255592  347534 kubeadm.go:319] OS: Linux
	I1213 10:22:52.255635  347534 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:22:52.255682  347534 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:22:52.255728  347534 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:22:52.255775  347534 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:22:52.255822  347534 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:22:52.255870  347534 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:22:52.255914  347534 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:22:52.255961  347534 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:22:52.256006  347534 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:22:52.326450  347534 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:22:52.326553  347534 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:22:52.326642  347534 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:22:52.335210  347534 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:22:52.341846  347534 out.go:252]   - Generating certificates and keys ...
	I1213 10:22:52.341957  347534 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:22:52.342034  347534 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:22:52.644701  347534 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 10:22:52.704498  347534 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 10:22:52.866219  347534 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 10:22:53.548467  347534 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 10:22:54.312965  347534 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 10:22:54.313105  347534 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-652709 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 10:22:54.783858  347534 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 10:22:54.784181  347534 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-652709 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 10:22:54.991172  347534 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 10:22:55.395053  347534 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 10:22:55.782378  347534 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 10:22:55.782452  347534 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:22:56.182323  347534 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:22:56.665880  347534 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:22:56.877677  347534 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:22:57.168905  347534 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:22:58.263186  347534 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:22:58.263819  347534 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:22:58.266856  347534 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:22:58.270397  347534 out.go:252]   - Booting up control plane ...
	I1213 10:22:58.270498  347534 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:22:58.270574  347534 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:22:58.270640  347534 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:22:58.285618  347534 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:22:58.285850  347534 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:22:58.295228  347534 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:22:58.295330  347534 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:22:58.295369  347534 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:22:58.423186  347534 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:22:58.423299  347534 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:26:58.419237  347534 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001296773s
	I1213 10:26:58.419260  347534 kubeadm.go:319] 
	I1213 10:26:58.419312  347534 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:26:58.419343  347534 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:26:58.419440  347534 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:26:58.419445  347534 kubeadm.go:319] 
	I1213 10:26:58.419542  347534 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:26:58.419571  347534 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:26:58.419599  347534 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:26:58.419602  347534 kubeadm.go:319] 
	I1213 10:26:58.425066  347534 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:26:58.425466  347534 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:26:58.425567  347534 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:26:58.425788  347534 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 10:26:58.425792  347534 kubeadm.go:319] 
	I1213 10:26:58.425855  347534 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1213 10:26:58.425960  347534 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-652709 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-652709 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001296773s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1213 10:26:58.426053  347534 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 10:26:58.845571  347534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:26:58.858891  347534 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:26:58.858946  347534 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:26:58.866822  347534 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:26:58.866831  347534 kubeadm.go:158] found existing configuration files:
	
	I1213 10:26:58.866880  347534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:26:58.874351  347534 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:26:58.874411  347534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:26:58.881444  347534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:26:58.889131  347534 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:26:58.889192  347534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:26:58.896496  347534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:26:58.903940  347534 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:26:58.903995  347534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:26:58.911383  347534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:26:58.918878  347534 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:26:58.918935  347534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:26:58.926388  347534 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:26:58.966153  347534 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:26:58.966388  347534 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:26:59.038263  347534 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:26:59.038324  347534 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:26:59.038360  347534 kubeadm.go:319] OS: Linux
	I1213 10:26:59.038402  347534 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:26:59.038445  347534 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:26:59.038488  347534 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:26:59.038533  347534 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:26:59.038577  347534 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:26:59.038621  347534 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:26:59.038662  347534 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:26:59.038718  347534 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:26:59.038761  347534 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:26:59.101715  347534 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:26:59.101849  347534 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:26:59.101956  347534 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:26:59.111063  347534 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:26:59.114418  347534 out.go:252]   - Generating certificates and keys ...
	I1213 10:26:59.114518  347534 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:26:59.114598  347534 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:26:59.114681  347534 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:26:59.114803  347534 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:26:59.114868  347534 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:26:59.114918  347534 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:26:59.114983  347534 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:26:59.115048  347534 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:26:59.115144  347534 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:26:59.115222  347534 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:26:59.115258  347534 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:26:59.115311  347534 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:26:59.170504  347534 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:26:59.450680  347534 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:26:59.886874  347534 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:27:00.166515  347534 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:27:00.523183  347534 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:27:00.523691  347534 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:27:00.526396  347534 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:27:00.529466  347534 out.go:252]   - Booting up control plane ...
	I1213 10:27:00.529571  347534 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:27:00.529648  347534 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:27:00.529714  347534 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:27:00.550934  347534 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:27:00.551347  347534 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:27:00.559929  347534 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:27:00.560370  347534 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:27:00.560605  347534 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:27:00.691215  347534 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:27:00.691328  347534 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:31:00.687036  347534 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000358804s
	I1213 10:31:00.695089  347534 kubeadm.go:319] 
	I1213 10:31:00.695237  347534 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:31:00.695275  347534 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:31:00.695400  347534 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:31:00.695405  347534 kubeadm.go:319] 
	I1213 10:31:00.695529  347534 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:31:00.695562  347534 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:31:00.695600  347534 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:31:00.695604  347534 kubeadm.go:319] 
	I1213 10:31:00.700193  347534 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:31:00.700668  347534 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:31:00.700794  347534 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:31:00.701077  347534 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 10:31:00.701082  347534 kubeadm.go:319] 
	I1213 10:31:00.701155  347534 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 10:31:00.701216  347534 kubeadm.go:403] duration metric: took 8m8.71320916s to StartCluster
	I1213 10:31:00.701258  347534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:31:00.701341  347534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:31:00.727702  347534 cri.go:89] found id: ""
	I1213 10:31:00.727730  347534 logs.go:282] 0 containers: []
	W1213 10:31:00.727737  347534 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:31:00.727743  347534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:31:00.727810  347534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:31:00.752310  347534 cri.go:89] found id: ""
	I1213 10:31:00.752325  347534 logs.go:282] 0 containers: []
	W1213 10:31:00.752332  347534 logs.go:284] No container was found matching "etcd"
	I1213 10:31:00.752336  347534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:31:00.752393  347534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:31:00.776946  347534 cri.go:89] found id: ""
	I1213 10:31:00.776960  347534 logs.go:282] 0 containers: []
	W1213 10:31:00.776967  347534 logs.go:284] No container was found matching "coredns"
	I1213 10:31:00.776972  347534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:31:00.777027  347534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:31:00.800007  347534 cri.go:89] found id: ""
	I1213 10:31:00.800021  347534 logs.go:282] 0 containers: []
	W1213 10:31:00.800028  347534 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:31:00.800033  347534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:31:00.800091  347534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:31:00.824757  347534 cri.go:89] found id: ""
	I1213 10:31:00.824771  347534 logs.go:282] 0 containers: []
	W1213 10:31:00.824778  347534 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:31:00.824783  347534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:31:00.824840  347534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:31:00.849594  347534 cri.go:89] found id: ""
	I1213 10:31:00.849608  347534 logs.go:282] 0 containers: []
	W1213 10:31:00.849615  347534 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:31:00.849622  347534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:31:00.849680  347534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:31:00.877004  347534 cri.go:89] found id: ""
	I1213 10:31:00.877019  347534 logs.go:282] 0 containers: []
	W1213 10:31:00.877026  347534 logs.go:284] No container was found matching "kindnet"
	I1213 10:31:00.877035  347534 logs.go:123] Gathering logs for kubelet ...
	I1213 10:31:00.877046  347534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:31:00.933417  347534 logs.go:123] Gathering logs for dmesg ...
	I1213 10:31:00.933437  347534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:31:00.949839  347534 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:31:00.949858  347534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:31:01.016526  347534 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:31:01.007626    4805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:31:01.008312    4805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:31:01.010133    4805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:31:01.010946    4805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:31:01.012542    4805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:31:01.007626    4805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:31:01.008312    4805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:31:01.010133    4805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:31:01.010946    4805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:31:01.012542    4805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:31:01.016537  347534 logs.go:123] Gathering logs for containerd ...
	I1213 10:31:01.016548  347534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:31:01.054604  347534 logs.go:123] Gathering logs for container status ...
	I1213 10:31:01.054625  347534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:31:01.084905  347534 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000358804s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 10:31:01.084943  347534 out.go:285] * 
	W1213 10:31:01.085017  347534 out.go:285] * 
	W1213 10:31:01.091466  347534 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:31:01.098112  347534 out.go:203] 
	W1213 10:31:01.101014  347534 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000358804s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 10:31:01.101069  347534 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 10:31:01.101089  347534 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 10:31:01.104082  347534 out.go:203] 
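
	The exit advice above points at the kubelet cgroup driver. As a minimal retry sketch, the failed start can be re-run with the suggested override; every flag below is reproduced from the StartCluster record at the top of this log, and only the --extra-config value comes from the suggestion itself, unverified against kubelet v1.35.0-beta.0:

	    # retry with the cgroup driver the suggestion above names
	    # (profile and flags taken from this log's StartCluster record)
	    minikube start -p functional-652709 --memory=4096 --apiserver-port=8441 \
	      --driver=docker --container-runtime=containerd \
	      --kubernetes-version=v1.35.0-beta.0 \
	      --extra-config=kubelet.cgroup-driver=systemd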
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.286559306Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.286625243Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.286802697Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.286894546Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.286955101Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.287017616Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.287084579Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.287157860Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.287231370Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.287320372Z" level=info msg="Connect containerd service"
	Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.287675083Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.288415070Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.302170119Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.302336151Z" level=info msg="Start subscribing containerd event"
	Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.302412771Z" level=info msg="Start recovering state"
	Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.302463783Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.340466841Z" level=info msg="Start event monitor"
	Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.340652516Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.340721850Z" level=info msg="Start streaming server"
	Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.340785925Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.340840260Z" level=info msg="runtime interface starting up..."
	Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.340890927Z" level=info msg="starting plugins..."
	Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.340987814Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.341230852Z" level=info msg="containerd successfully booted in 0.080598s"
	Dec 13 10:22:50 functional-652709 systemd[1]: Started containerd.service - containerd container runtime.
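
	The "failed to load cni during init" line above is usually benign at this point in bootstrap: nothing has written a network config to /etc/cni/net.d yet, which is consistent with the kindnet lookup earlier in this report finding no container. A hedged way to confirm that on the node (the command form is an assumption, not taken from this report):

	    # list CNI configs inside the node; an empty directory matches the
	    # containerd error above (profile name from this report)
	    minikube -p functional-652709 ssh -- ls -la /etc/cni/net.d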
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:31:02.122187    4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:31:02.122792    4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:31:02.124658    4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:31:02.125244    4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:31:02.126987    4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +12.907693] overlayfs: idmapped layers are currently not supported
	[Dec13 09:25] overlayfs: idmapped layers are currently not supported
	[ +26.192425] overlayfs: idmapped layers are currently not supported
	[Dec13 09:26] overlayfs: idmapped layers are currently not supported
	[ +25.729788] overlayfs: idmapped layers are currently not supported
	[Dec13 09:27] overlayfs: idmapped layers are currently not supported
	[Dec13 09:28] overlayfs: idmapped layers are currently not supported
	[Dec13 09:31] overlayfs: idmapped layers are currently not supported
	[Dec13 09:32] overlayfs: idmapped layers are currently not supported
	[ +17.979093] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec13 09:43] overlayfs: idmapped layers are currently not supported
	[Dec13 09:45] overlayfs: idmapped layers are currently not supported
	[ +25.885102] overlayfs: idmapped layers are currently not supported
	[Dec13 09:46] overlayfs: idmapped layers are currently not supported
	[ +22.078149] overlayfs: idmapped layers are currently not supported
	[Dec13 09:47] overlayfs: idmapped layers are currently not supported
	[Dec13 09:48] overlayfs: idmapped layers are currently not supported
	[Dec13 09:49] overlayfs: idmapped layers are currently not supported
	[Dec13 09:51] overlayfs: idmapped layers are currently not supported
	[ +17.043564] overlayfs: idmapped layers are currently not supported
	[Dec13 09:52] overlayfs: idmapped layers are currently not supported
	[Dec13 09:53] overlayfs: idmapped layers are currently not supported
	[Dec13 10:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:19] hrtimer: interrupt took 21247146 ns
	
	
	==> kernel <==
	 10:31:02 up  3:13,  0 user,  load average: 0.32, 0.54, 1.04
	Linux functional-652709 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:30:59 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:30:59 functional-652709 kubelet[4723]: E1213 10:30:59.203770    4723 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:30:59 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:30:59 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:30:59 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 13 10:30:59 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:30:59 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:30:59 functional-652709 kubelet[4729]: E1213 10:30:59.955749    4729 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:30:59 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:30:59 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:31:00 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 13 10:31:00 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:31:00 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:31:00 functional-652709 kubelet[4734]: E1213 10:31:00.716183    4734 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:31:00 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:31:00 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:31:01 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 13 10:31:01 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:31:01 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:31:01 functional-652709 kubelet[4835]: E1213 10:31:01.457933    4835 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:31:01 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:31:01 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:31:02 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 13 10:31:02 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:31:02 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
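
The kubelet restart loop at the end of the log above is a validation failure rather than a crash: this kubelet build refuses to run on a cgroup v1 host. A hedged way to confirm which cgroup hierarchy the node is on (the filesystem type of /sys/fs/cgroup is the usual discriminator):

    # cgroup2fs => cgroup v2 (unified); tmpfs => cgroup v1 (legacy/hybrid)
    stat -fc %T /sys/fs/cgroup/

On a v1 host, booting the kernel with systemd.unified_cgroup_hierarchy=1 switches it to v2, but that is a host-level change and an assumption about remediation, not something this test performs.
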
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-652709 -n functional-652709
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-652709 -n functional-652709: exit status 6 (341.316645ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 10:31:02.604146  353323 status.go:458] kubeconfig endpoint: get endpoint: "functional-652709" does not appear in /home/jenkins/minikube-integration/22127-307042/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-652709" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (503.24s)
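
The stale-context warning in the stdout block above suggests its own fix; assuming the same profile name and binary used throughout this run, the repair and a quick verification would look like:

    # re-point the kubeconfig entry at the current endpoint for this profile
    out/minikube-linux-arm64 update-context -p functional-652709
    # confirm the context the status helper failed to find now exists
    kubectl config get-contexts functional-652709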

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (368.77s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1213 10:31:02.621501  308915 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-652709 --alsologtostderr -v=8
E1213 10:31:48.080620  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:32:15.792397  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:35:12.242022  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:36:35.315239  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:36:48.080808  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-652709 --alsologtostderr -v=8: exit status 80 (6m5.819920218s)

                                                
                                                
-- stdout --
	* [functional-652709] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-652709" primary control-plane node in "functional-652709" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 10:31:02.672113  353396 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:31:02.672249  353396 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:31:02.672258  353396 out.go:374] Setting ErrFile to fd 2...
	I1213 10:31:02.672263  353396 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:31:02.672511  353396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 10:31:02.672909  353396 out.go:368] Setting JSON to false
	I1213 10:31:02.673776  353396 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":11616,"bootTime":1765610247,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 10:31:02.673896  353396 start.go:143] virtualization:  
	I1213 10:31:02.677410  353396 out.go:179] * [functional-652709] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:31:02.681384  353396 notify.go:221] Checking for updates...
	I1213 10:31:02.681459  353396 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 10:31:02.684444  353396 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:31:02.687336  353396 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:31:02.690317  353396 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 10:31:02.693212  353396 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:31:02.696019  353396 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:31:02.699466  353396 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:31:02.699577  353396 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:31:02.725188  353396 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:31:02.725318  353396 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:31:02.796082  353396 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:31:02.785556605 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:31:02.796187  353396 docker.go:319] overlay module found
	I1213 10:31:02.799378  353396 out.go:179] * Using the docker driver based on existing profile
	I1213 10:31:02.802341  353396 start.go:309] selected driver: docker
	I1213 10:31:02.802370  353396 start.go:927] validating driver "docker" against &{Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:31:02.802524  353396 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:31:02.802652  353396 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:31:02.859333  353396 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:31:02.849982894 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:31:02.859762  353396 cni.go:84] Creating CNI manager for ""
	I1213 10:31:02.859824  353396 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:31:02.859884  353396 start.go:353] cluster config:
	{Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:31:02.863117  353396 out.go:179] * Starting "functional-652709" primary control-plane node in "functional-652709" cluster
	I1213 10:31:02.865981  353396 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 10:31:02.868957  353396 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:31:02.871941  353396 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 10:31:02.871997  353396 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 10:31:02.872008  353396 cache.go:65] Caching tarball of preloaded images
	I1213 10:31:02.872055  353396 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:31:02.872104  353396 preload.go:238] Found /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 10:31:02.872129  353396 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 10:31:02.872236  353396 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/config.json ...
	I1213 10:31:02.890218  353396 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:31:02.890243  353396 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:31:02.890259  353396 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:31:02.890291  353396 start.go:360] acquireMachinesLock for functional-652709: {Name:mk6e8c40fbbb5af0bb2468340fd710875030300d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:31:02.890351  353396 start.go:364] duration metric: took 34.691µs to acquireMachinesLock for "functional-652709"
	I1213 10:31:02.890374  353396 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:31:02.890380  353396 fix.go:54] fixHost starting: 
	I1213 10:31:02.890658  353396 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
	I1213 10:31:02.911217  353396 fix.go:112] recreateIfNeeded on functional-652709: state=Running err=<nil>
	W1213 10:31:02.911248  353396 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 10:31:02.914505  353396 out.go:252] * Updating the running docker "functional-652709" container ...
	I1213 10:31:02.914550  353396 machine.go:94] provisionDockerMachine start ...
	I1213 10:31:02.914653  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:02.937238  353396 main.go:143] libmachine: Using SSH client type: native
	I1213 10:31:02.937582  353396 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1213 10:31:02.937592  353396 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:31:03.091334  353396 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-652709
	
	I1213 10:31:03.091359  353396 ubuntu.go:182] provisioning hostname "functional-652709"
	I1213 10:31:03.091424  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:03.110422  353396 main.go:143] libmachine: Using SSH client type: native
	I1213 10:31:03.110837  353396 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1213 10:31:03.110855  353396 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-652709 && echo "functional-652709" | sudo tee /etc/hostname
	I1213 10:31:03.277113  353396 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-652709
	
	I1213 10:31:03.277196  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:03.294664  353396 main.go:143] libmachine: Using SSH client type: native
	I1213 10:31:03.295057  353396 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1213 10:31:03.295079  353396 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-652709' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-652709/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-652709' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:31:03.447182  353396 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:31:03.447207  353396 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-307042/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-307042/.minikube}
	I1213 10:31:03.447240  353396 ubuntu.go:190] setting up certificates
	I1213 10:31:03.447256  353396 provision.go:84] configureAuth start
	I1213 10:31:03.447330  353396 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-652709
	I1213 10:31:03.465044  353396 provision.go:143] copyHostCerts
	I1213 10:31:03.465100  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem
	I1213 10:31:03.465141  353396 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem, removing ...
	I1213 10:31:03.465148  353396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem
	I1213 10:31:03.465220  353396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem (1082 bytes)
	I1213 10:31:03.465329  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem
	I1213 10:31:03.465349  353396 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem, removing ...
	I1213 10:31:03.465353  353396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem
	I1213 10:31:03.465383  353396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem (1123 bytes)
	I1213 10:31:03.465436  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem
	I1213 10:31:03.465453  353396 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem, removing ...
	I1213 10:31:03.465457  353396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem
	I1213 10:31:03.465486  353396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem (1675 bytes)
	I1213 10:31:03.465541  353396 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem org=jenkins.functional-652709 san=[127.0.0.1 192.168.49.2 functional-652709 localhost minikube]
	I1213 10:31:03.927648  353396 provision.go:177] copyRemoteCerts
	I1213 10:31:03.927724  353396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:31:03.927763  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:03.947692  353396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:31:04.064623  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 10:31:04.064688  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:31:04.082355  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 10:31:04.082418  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:31:04.100866  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 10:31:04.100930  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:31:04.121259  353396 provision.go:87] duration metric: took 673.978127ms to configureAuth
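
	configureAuth regenerated the machine server certificate with the SAN list logged above (127.0.0.1, 192.168.49.2, functional-652709, localhost, minikube). A hedged spot-check that the copied certificate actually carries those SANs, run on the node:

	    # inspect the server cert that was just copied to /etc/docker/server.pem
	    sudo openssl x509 -in /etc/docker/server.pem -noout -text \
	      | grep -A1 'Subject Alternative Name'
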
	I1213 10:31:04.121312  353396 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:31:04.121495  353396 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:31:04.121509  353396 machine.go:97] duration metric: took 1.206951102s to provisionDockerMachine
	I1213 10:31:04.121518  353396 start.go:293] postStartSetup for "functional-652709" (driver="docker")
	I1213 10:31:04.121529  353396 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:31:04.121586  353396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:31:04.121633  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:04.139400  353396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:31:04.246752  353396 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:31:04.250273  353396 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1213 10:31:04.250297  353396 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1213 10:31:04.250302  353396 command_runner.go:130] > VERSION_ID="12"
	I1213 10:31:04.250307  353396 command_runner.go:130] > VERSION="12 (bookworm)"
	I1213 10:31:04.250312  353396 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1213 10:31:04.250316  353396 command_runner.go:130] > ID=debian
	I1213 10:31:04.250320  353396 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1213 10:31:04.250325  353396 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1213 10:31:04.250331  353396 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1213 10:31:04.250368  353396 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:31:04.250390  353396 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:31:04.250401  353396 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/addons for local assets ...
	I1213 10:31:04.250463  353396 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/files for local assets ...
	I1213 10:31:04.250545  353396 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> 3089152.pem in /etc/ssl/certs
	I1213 10:31:04.250556  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> /etc/ssl/certs/3089152.pem
	I1213 10:31:04.250633  353396 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/test/nested/copy/308915/hosts -> hosts in /etc/test/nested/copy/308915
	I1213 10:31:04.250715  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/test/nested/copy/308915/hosts -> /etc/test/nested/copy/308915/hosts
	I1213 10:31:04.250766  353396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/308915
	I1213 10:31:04.258199  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 10:31:04.275892  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/test/nested/copy/308915/hosts --> /etc/test/nested/copy/308915/hosts (40 bytes)
	I1213 10:31:04.293256  353396 start.go:296] duration metric: took 171.721845ms for postStartSetup
	I1213 10:31:04.293373  353396 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:31:04.293418  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:04.310428  353396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:31:04.412061  353396 command_runner.go:130] > 11%
	I1213 10:31:04.412134  353396 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:31:04.417606  353396 command_runner.go:130] > 174G
	I1213 10:31:04.418241  353396 fix.go:56] duration metric: took 1.527856492s for fixHost
	I1213 10:31:04.418260  353396 start.go:83] releasing machines lock for "functional-652709", held for 1.527895524s
	I1213 10:31:04.418328  353396 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-652709
	I1213 10:31:04.443217  353396 ssh_runner.go:195] Run: cat /version.json
	I1213 10:31:04.443268  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:04.443564  353396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:31:04.443617  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:04.481371  353396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:31:04.481516  353396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:31:04.669844  353396 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1213 10:31:04.669910  353396 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1213 10:31:04.670045  353396 ssh_runner.go:195] Run: systemctl --version
	I1213 10:31:04.676239  353396 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1213 10:31:04.676276  353396 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1213 10:31:04.676350  353396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 10:31:04.680689  353396 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 10:31:04.680854  353396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:31:04.680918  353396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:31:04.688793  353396 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 10:31:04.688818  353396 start.go:496] detecting cgroup driver to use...
	I1213 10:31:04.688851  353396 detect.go:187] detected "cgroupfs" cgroup driver on host os
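
	detect.go reads the cgroup driver off the host Docker daemon; the same answer can be cross-checked straight from `docker info` on the host (treat the exact template fields as an assumption about the local Docker CLI version):

	    # should print "cgroupfs" here, matching the detection above
	    docker info --format '{{.CgroupDriver}} (cgroup v{{.CgroupVersion}})'
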
	I1213 10:31:04.688909  353396 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 10:31:04.704425  353396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 10:31:04.717662  353396 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:31:04.717728  353396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:31:04.733551  353396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:31:04.746955  353396 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:31:04.865557  353396 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:31:04.977869  353396 docker.go:234] disabling docker service ...
	I1213 10:31:04.977950  353396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:31:04.992461  353396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:31:05.013428  353396 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:31:05.135601  353396 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:31:05.282715  353396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:31:05.296047  353396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:31:05.308957  353396 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1213 10:31:05.310188  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 10:31:05.319385  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 10:31:05.328561  353396 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 10:31:05.328627  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 10:31:05.337573  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:31:05.346847  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 10:31:05.355976  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:31:05.364985  353396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:31:05.373424  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 10:31:05.382892  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 10:31:05.391826  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
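
	The sed series above rewrites /etc/containerd/config.toml in place: sandbox image, SystemdCgroup = false to match the cgroupfs driver, the runc v2 runtime type, the CNI conf_dir, and unprivileged ports. A hedged way to verify the result without reproducing the whole file (the TOML table paths differ between containerd 1.x and 2.x, so grep rather than assume a section):

	    sudo grep -nE 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports|restrict_oom_score_adj' \
	      /etc/containerd/config.toml
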
	I1213 10:31:05.401136  353396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:31:05.407987  353396 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 10:31:05.408928  353396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:31:05.416444  353396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:31:05.526748  353396 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 10:31:05.655433  353396 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 10:31:05.655515  353396 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 10:31:05.659353  353396 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1213 10:31:05.659378  353396 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 10:31:05.659389  353396 command_runner.go:130] > Device: 0,72	Inode: 1622        Links: 1
	I1213 10:31:05.659396  353396 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 10:31:05.659402  353396 command_runner.go:130] > Access: 2025-12-13 10:31:05.610211940 +0000
	I1213 10:31:05.659407  353396 command_runner.go:130] > Modify: 2025-12-13 10:31:05.610211940 +0000
	I1213 10:31:05.659412  353396 command_runner.go:130] > Change: 2025-12-13 10:31:05.610211940 +0000
	I1213 10:31:05.659416  353396 command_runner.go:130] >  Birth: -
	I1213 10:31:05.660005  353396 start.go:564] Will wait 60s for crictl version
	I1213 10:31:05.660063  353396 ssh_runner.go:195] Run: which crictl
	I1213 10:31:05.663492  353396 command_runner.go:130] > /usr/local/bin/crictl
	I1213 10:31:05.663579  353396 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:31:05.685881  353396 command_runner.go:130] > Version:  0.1.0
	I1213 10:31:05.685946  353396 command_runner.go:130] > RuntimeName:  containerd
	I1213 10:31:05.686097  353396 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1213 10:31:05.686253  353396 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 10:31:05.688463  353396 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 10:31:05.688528  353396 ssh_runner.go:195] Run: containerd --version
	I1213 10:31:05.706883  353396 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1213 10:31:05.709639  353396 ssh_runner.go:195] Run: containerd --version
	I1213 10:31:05.727187  353396 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1213 10:31:05.735610  353396 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 10:31:05.738579  353396 cli_runner.go:164] Run: docker network inspect functional-652709 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:31:05.753316  353396 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 10:31:05.757039  353396 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1213 10:31:05.757213  353396 kubeadm.go:884] updating cluster {Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:31:05.757336  353396 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 10:31:05.757417  353396 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:31:05.778952  353396 command_runner.go:130] > {
	I1213 10:31:05.778976  353396 command_runner.go:130] >   "images":  [
	I1213 10:31:05.778980  353396 command_runner.go:130] >     {
	I1213 10:31:05.778990  353396 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 10:31:05.778995  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779001  353396 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 10:31:05.779005  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779009  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779018  353396 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1213 10:31:05.779024  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779028  353396 command_runner.go:130] >       "size":  "40636774",
	I1213 10:31:05.779032  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779041  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779045  353396 command_runner.go:130] >     },
	I1213 10:31:05.779053  353396 command_runner.go:130] >     {
	I1213 10:31:05.779066  353396 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 10:31:05.779074  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779080  353396 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 10:31:05.779087  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779091  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779102  353396 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 10:31:05.779106  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779110  353396 command_runner.go:130] >       "size":  "8034419",
	I1213 10:31:05.779116  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779120  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779128  353396 command_runner.go:130] >     },
	I1213 10:31:05.779131  353396 command_runner.go:130] >     {
	I1213 10:31:05.779138  353396 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 10:31:05.779145  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779150  353396 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 10:31:05.779157  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779163  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779175  353396 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1213 10:31:05.779181  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779185  353396 command_runner.go:130] >       "size":  "21168808",
	I1213 10:31:05.779190  353396 command_runner.go:130] >       "username":  "nonroot",
	I1213 10:31:05.779195  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779199  353396 command_runner.go:130] >     },
	I1213 10:31:05.779204  353396 command_runner.go:130] >     {
	I1213 10:31:05.779211  353396 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 10:31:05.779218  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779224  353396 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 10:31:05.779231  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779235  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779246  353396 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1213 10:31:05.779252  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779257  353396 command_runner.go:130] >       "size":  "21136588",
	I1213 10:31:05.779267  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.779275  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.779279  353396 command_runner.go:130] >       },
	I1213 10:31:05.779283  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779290  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779299  353396 command_runner.go:130] >     },
	I1213 10:31:05.779303  353396 command_runner.go:130] >     {
	I1213 10:31:05.779314  353396 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 10:31:05.779321  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779327  353396 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 10:31:05.779334  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779338  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779350  353396 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1213 10:31:05.779357  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779361  353396 command_runner.go:130] >       "size":  "24678359",
	I1213 10:31:05.779365  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.779375  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.779384  353396 command_runner.go:130] >       },
	I1213 10:31:05.779388  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779396  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779400  353396 command_runner.go:130] >     },
	I1213 10:31:05.779407  353396 command_runner.go:130] >     {
	I1213 10:31:05.779414  353396 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 10:31:05.779421  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779428  353396 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 10:31:05.779435  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779439  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779450  353396 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1213 10:31:05.779454  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779461  353396 command_runner.go:130] >       "size":  "20661043",
	I1213 10:31:05.779465  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.779473  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.779477  353396 command_runner.go:130] >       },
	I1213 10:31:05.779489  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779497  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779501  353396 command_runner.go:130] >     },
	I1213 10:31:05.779507  353396 command_runner.go:130] >     {
	I1213 10:31:05.779515  353396 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 10:31:05.779522  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779527  353396 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 10:31:05.779534  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779538  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779546  353396 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 10:31:05.779553  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779557  353396 command_runner.go:130] >       "size":  "22429671",
	I1213 10:31:05.779561  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779567  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779571  353396 command_runner.go:130] >     },
	I1213 10:31:05.779578  353396 command_runner.go:130] >     {
	I1213 10:31:05.779586  353396 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 10:31:05.779593  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779600  353396 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 10:31:05.779606  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779610  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779622  353396 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1213 10:31:05.779628  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779633  353396 command_runner.go:130] >       "size":  "15391364",
	I1213 10:31:05.779641  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.779645  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.779648  353396 command_runner.go:130] >       },
	I1213 10:31:05.779654  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779658  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779666  353396 command_runner.go:130] >     },
	I1213 10:31:05.779669  353396 command_runner.go:130] >     {
	I1213 10:31:05.779681  353396 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 10:31:05.779688  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779698  353396 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 10:31:05.779704  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779709  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779720  353396 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1213 10:31:05.779726  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779730  353396 command_runner.go:130] >       "size":  "267939",
	I1213 10:31:05.779735  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.779741  353396 command_runner.go:130] >         "value":  "65535"
	I1213 10:31:05.779744  353396 command_runner.go:130] >       },
	I1213 10:31:05.779753  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779758  353396 command_runner.go:130] >       "pinned":  true
	I1213 10:31:05.779764  353396 command_runner.go:130] >     }
	I1213 10:31:05.779767  353396 command_runner.go:130] >   ]
	I1213 10:31:05.779770  353396 command_runner.go:130] > }
	I1213 10:31:05.781791  353396 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 10:31:05.781813  353396 containerd.go:534] Images already preloaded, skipping extraction
	I1213 10:31:05.781881  353396 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:31:05.805396  353396 command_runner.go:130] > {
	I1213 10:31:05.805420  353396 command_runner.go:130] >   "images":  [
	I1213 10:31:05.805426  353396 command_runner.go:130] >     {
	I1213 10:31:05.805436  353396 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 10:31:05.805441  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.805447  353396 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 10:31:05.805452  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805456  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.805465  353396 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1213 10:31:05.805471  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805477  353396 command_runner.go:130] >       "size":  "40636774",
	I1213 10:31:05.805485  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.805490  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.805501  353396 command_runner.go:130] >     },
	I1213 10:31:05.805504  353396 command_runner.go:130] >     {
	I1213 10:31:05.805512  353396 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 10:31:05.805517  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.805523  353396 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 10:31:05.805528  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805543  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.805556  353396 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 10:31:05.805566  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805576  353396 command_runner.go:130] >       "size":  "8034419",
	I1213 10:31:05.805580  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.805590  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.805594  353396 command_runner.go:130] >     },
	I1213 10:31:05.805601  353396 command_runner.go:130] >     {
	I1213 10:31:05.805608  353396 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 10:31:05.805619  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.805625  353396 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 10:31:05.805630  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805655  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.805669  353396 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1213 10:31:05.805675  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805680  353396 command_runner.go:130] >       "size":  "21168808",
	I1213 10:31:05.805687  353396 command_runner.go:130] >       "username":  "nonroot",
	I1213 10:31:05.805693  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.805697  353396 command_runner.go:130] >     },
	I1213 10:31:05.805701  353396 command_runner.go:130] >     {
	I1213 10:31:05.805707  353396 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 10:31:05.805715  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.805720  353396 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 10:31:05.805727  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805732  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.805743  353396 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1213 10:31:05.805750  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805754  353396 command_runner.go:130] >       "size":  "21136588",
	I1213 10:31:05.805762  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.805772  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.805778  353396 command_runner.go:130] >       },
	I1213 10:31:05.805783  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.805787  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.805795  353396 command_runner.go:130] >     },
	I1213 10:31:05.805803  353396 command_runner.go:130] >     {
	I1213 10:31:05.805810  353396 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 10:31:05.805818  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.805824  353396 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 10:31:05.805846  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805855  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.805863  353396 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1213 10:31:05.805867  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805873  353396 command_runner.go:130] >       "size":  "24678359",
	I1213 10:31:05.805877  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.805891  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.805894  353396 command_runner.go:130] >       },
	I1213 10:31:05.805899  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.805906  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.805910  353396 command_runner.go:130] >     },
	I1213 10:31:05.805917  353396 command_runner.go:130] >     {
	I1213 10:31:05.805924  353396 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 10:31:05.805931  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.805938  353396 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 10:31:05.805941  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805946  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.805956  353396 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1213 10:31:05.805963  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805967  353396 command_runner.go:130] >       "size":  "20661043",
	I1213 10:31:05.805972  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.805979  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.805983  353396 command_runner.go:130] >       },
	I1213 10:31:05.805991  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.805995  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.806002  353396 command_runner.go:130] >     },
	I1213 10:31:05.806005  353396 command_runner.go:130] >     {
	I1213 10:31:05.806012  353396 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 10:31:05.806021  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.806032  353396 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 10:31:05.806036  353396 command_runner.go:130] >       ],
	I1213 10:31:05.806040  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.806048  353396 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 10:31:05.806055  353396 command_runner.go:130] >       ],
	I1213 10:31:05.806059  353396 command_runner.go:130] >       "size":  "22429671",
	I1213 10:31:05.806068  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.806072  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.806078  353396 command_runner.go:130] >     },
	I1213 10:31:05.806082  353396 command_runner.go:130] >     {
	I1213 10:31:05.806089  353396 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 10:31:05.806096  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.806101  353396 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 10:31:05.806109  353396 command_runner.go:130] >       ],
	I1213 10:31:05.806113  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.806124  353396 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1213 10:31:05.806131  353396 command_runner.go:130] >       ],
	I1213 10:31:05.806135  353396 command_runner.go:130] >       "size":  "15391364",
	I1213 10:31:05.806139  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.806147  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.806151  353396 command_runner.go:130] >       },
	I1213 10:31:05.806159  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.806164  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.806171  353396 command_runner.go:130] >     },
	I1213 10:31:05.806174  353396 command_runner.go:130] >     {
	I1213 10:31:05.806180  353396 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 10:31:05.806186  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.806191  353396 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 10:31:05.806197  353396 command_runner.go:130] >       ],
	I1213 10:31:05.806202  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.806213  353396 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1213 10:31:05.806217  353396 command_runner.go:130] >       ],
	I1213 10:31:05.806230  353396 command_runner.go:130] >       "size":  "267939",
	I1213 10:31:05.806238  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.806242  353396 command_runner.go:130] >         "value":  "65535"
	I1213 10:31:05.806251  353396 command_runner.go:130] >       },
	I1213 10:31:05.806255  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.806259  353396 command_runner.go:130] >       "pinned":  true
	I1213 10:31:05.806262  353396 command_runner.go:130] >     }
	I1213 10:31:05.806267  353396 command_runner.go:130] >   ]
	I1213 10:31:05.806271  353396 command_runner.go:130] > }
	I1213 10:31:05.808725  353396 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 10:31:05.808749  353396 cache_images.go:86] Images are preloaded, skipping loading
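	The preload check above works by running crictl images --output json on the node and comparing the returned repoTags against the image set expected for v1.35.0-beta.0. A minimal way to repeat the query by hand against this profile (assuming jq is available on the host; jq is not part of minikube):
	
	    minikube ssh -p functional-652709 -- sudo crictl images --output json | jq -r '.images[].repoTags[]'
	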
	I1213 10:31:05.808757  353396 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1213 10:31:05.808887  353396 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-652709 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
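	The [Unit]/[Service] fragment above is rendered into the kubelet systemd drop-in (written a few steps below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). A sketch for inspecting what actually landed on the node, assuming the profile is still running:
	
	    minikube ssh -p functional-652709 -- sudo systemctl cat kubelet
	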
	I1213 10:31:05.808967  353396 ssh_runner.go:195] Run: sudo crictl info
	I1213 10:31:05.831572  353396 command_runner.go:130] > {
	I1213 10:31:05.831594  353396 command_runner.go:130] >   "cniconfig": {
	I1213 10:31:05.831601  353396 command_runner.go:130] >     "Networks": [
	I1213 10:31:05.831604  353396 command_runner.go:130] >       {
	I1213 10:31:05.831609  353396 command_runner.go:130] >         "Config": {
	I1213 10:31:05.831614  353396 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1213 10:31:05.831619  353396 command_runner.go:130] >           "Name": "cni-loopback",
	I1213 10:31:05.831623  353396 command_runner.go:130] >           "Plugins": [
	I1213 10:31:05.831627  353396 command_runner.go:130] >             {
	I1213 10:31:05.831631  353396 command_runner.go:130] >               "Network": {
	I1213 10:31:05.831635  353396 command_runner.go:130] >                 "ipam": {},
	I1213 10:31:05.831641  353396 command_runner.go:130] >                 "type": "loopback"
	I1213 10:31:05.831650  353396 command_runner.go:130] >               },
	I1213 10:31:05.831662  353396 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1213 10:31:05.831670  353396 command_runner.go:130] >             }
	I1213 10:31:05.831674  353396 command_runner.go:130] >           ],
	I1213 10:31:05.831684  353396 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1213 10:31:05.831688  353396 command_runner.go:130] >         },
	I1213 10:31:05.831696  353396 command_runner.go:130] >         "IFName": "lo"
	I1213 10:31:05.831703  353396 command_runner.go:130] >       }
	I1213 10:31:05.831707  353396 command_runner.go:130] >     ],
	I1213 10:31:05.831712  353396 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1213 10:31:05.831720  353396 command_runner.go:130] >     "PluginDirs": [
	I1213 10:31:05.831724  353396 command_runner.go:130] >       "/opt/cni/bin"
	I1213 10:31:05.831731  353396 command_runner.go:130] >     ],
	I1213 10:31:05.831736  353396 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1213 10:31:05.831743  353396 command_runner.go:130] >     "Prefix": "eth"
	I1213 10:31:05.831747  353396 command_runner.go:130] >   },
	I1213 10:31:05.831754  353396 command_runner.go:130] >   "config": {
	I1213 10:31:05.831762  353396 command_runner.go:130] >     "cdiSpecDirs": [
	I1213 10:31:05.831765  353396 command_runner.go:130] >       "/etc/cdi",
	I1213 10:31:05.831781  353396 command_runner.go:130] >       "/var/run/cdi"
	I1213 10:31:05.831789  353396 command_runner.go:130] >     ],
	I1213 10:31:05.831793  353396 command_runner.go:130] >     "cni": {
	I1213 10:31:05.831797  353396 command_runner.go:130] >       "binDir": "",
	I1213 10:31:05.831801  353396 command_runner.go:130] >       "binDirs": [
	I1213 10:31:05.831810  353396 command_runner.go:130] >         "/opt/cni/bin"
	I1213 10:31:05.831814  353396 command_runner.go:130] >       ],
	I1213 10:31:05.831818  353396 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1213 10:31:05.831821  353396 command_runner.go:130] >       "confTemplate": "",
	I1213 10:31:05.831825  353396 command_runner.go:130] >       "ipPref": "",
	I1213 10:31:05.831829  353396 command_runner.go:130] >       "maxConfNum": 1,
	I1213 10:31:05.831832  353396 command_runner.go:130] >       "setupSerially": false,
	I1213 10:31:05.831837  353396 command_runner.go:130] >       "useInternalLoopback": false
	I1213 10:31:05.831840  353396 command_runner.go:130] >     },
	I1213 10:31:05.831851  353396 command_runner.go:130] >     "containerd": {
	I1213 10:31:05.831859  353396 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1213 10:31:05.831864  353396 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1213 10:31:05.831869  353396 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1213 10:31:05.831872  353396 command_runner.go:130] >       "runtimes": {
	I1213 10:31:05.831875  353396 command_runner.go:130] >         "runc": {
	I1213 10:31:05.831879  353396 command_runner.go:130] >           "ContainerAnnotations": null,
	I1213 10:31:05.831884  353396 command_runner.go:130] >           "PodAnnotations": null,
	I1213 10:31:05.831891  353396 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1213 10:31:05.831895  353396 command_runner.go:130] >           "cgroupWritable": false,
	I1213 10:31:05.831899  353396 command_runner.go:130] >           "cniConfDir": "",
	I1213 10:31:05.831905  353396 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1213 10:31:05.831910  353396 command_runner.go:130] >           "io_type": "",
	I1213 10:31:05.831919  353396 command_runner.go:130] >           "options": {
	I1213 10:31:05.831924  353396 command_runner.go:130] >             "BinaryName": "",
	I1213 10:31:05.831929  353396 command_runner.go:130] >             "CriuImagePath": "",
	I1213 10:31:05.831936  353396 command_runner.go:130] >             "CriuWorkPath": "",
	I1213 10:31:05.831940  353396 command_runner.go:130] >             "IoGid": 0,
	I1213 10:31:05.831948  353396 command_runner.go:130] >             "IoUid": 0,
	I1213 10:31:05.831953  353396 command_runner.go:130] >             "NoNewKeyring": false,
	I1213 10:31:05.831961  353396 command_runner.go:130] >             "Root": "",
	I1213 10:31:05.831965  353396 command_runner.go:130] >             "ShimCgroup": "",
	I1213 10:31:05.831970  353396 command_runner.go:130] >             "SystemdCgroup": false
	I1213 10:31:05.831992  353396 command_runner.go:130] >           },
	I1213 10:31:05.831998  353396 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1213 10:31:05.832004  353396 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1213 10:31:05.832011  353396 command_runner.go:130] >           "runtimePath": "",
	I1213 10:31:05.832017  353396 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1213 10:31:05.832025  353396 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1213 10:31:05.832030  353396 command_runner.go:130] >           "snapshotter": ""
	I1213 10:31:05.832037  353396 command_runner.go:130] >         }
	I1213 10:31:05.832040  353396 command_runner.go:130] >       }
	I1213 10:31:05.832043  353396 command_runner.go:130] >     },
	I1213 10:31:05.832055  353396 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1213 10:31:05.832065  353396 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1213 10:31:05.832073  353396 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1213 10:31:05.832081  353396 command_runner.go:130] >     "disableApparmor": false,
	I1213 10:31:05.832086  353396 command_runner.go:130] >     "disableHugetlbController": true,
	I1213 10:31:05.832093  353396 command_runner.go:130] >     "disableProcMount": false,
	I1213 10:31:05.832098  353396 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1213 10:31:05.832106  353396 command_runner.go:130] >     "enableCDI": true,
	I1213 10:31:05.832110  353396 command_runner.go:130] >     "enableSelinux": false,
	I1213 10:31:05.832118  353396 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1213 10:31:05.832123  353396 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1213 10:31:05.832131  353396 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1213 10:31:05.832135  353396 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1213 10:31:05.832140  353396 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1213 10:31:05.832144  353396 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1213 10:31:05.832151  353396 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1213 10:31:05.832157  353396 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1213 10:31:05.832165  353396 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1213 10:31:05.832171  353396 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1213 10:31:05.832180  353396 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1213 10:31:05.832185  353396 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1213 10:31:05.832192  353396 command_runner.go:130] >   },
	I1213 10:31:05.832195  353396 command_runner.go:130] >   "features": {
	I1213 10:31:05.832204  353396 command_runner.go:130] >     "supplemental_groups_policy": true
	I1213 10:31:05.832208  353396 command_runner.go:130] >   },
	I1213 10:31:05.832212  353396 command_runner.go:130] >   "golang": "go1.24.9",
	I1213 10:31:05.832222  353396 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1213 10:31:05.832235  353396 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1213 10:31:05.832240  353396 command_runner.go:130] >   "runtimeHandlers": [
	I1213 10:31:05.832245  353396 command_runner.go:130] >     {
	I1213 10:31:05.832248  353396 command_runner.go:130] >       "features": {
	I1213 10:31:05.832257  353396 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1213 10:31:05.832262  353396 command_runner.go:130] >         "user_namespaces": true
	I1213 10:31:05.832268  353396 command_runner.go:130] >       }
	I1213 10:31:05.832276  353396 command_runner.go:130] >     },
	I1213 10:31:05.832283  353396 command_runner.go:130] >     {
	I1213 10:31:05.832287  353396 command_runner.go:130] >       "features": {
	I1213 10:31:05.832295  353396 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1213 10:31:05.832299  353396 command_runner.go:130] >         "user_namespaces": true
	I1213 10:31:05.832302  353396 command_runner.go:130] >       },
	I1213 10:31:05.832307  353396 command_runner.go:130] >       "name": "runc"
	I1213 10:31:05.832310  353396 command_runner.go:130] >     }
	I1213 10:31:05.832313  353396 command_runner.go:130] >   ],
	I1213 10:31:05.832316  353396 command_runner.go:130] >   "status": {
	I1213 10:31:05.832320  353396 command_runner.go:130] >     "conditions": [
	I1213 10:31:05.832325  353396 command_runner.go:130] >       {
	I1213 10:31:05.832330  353396 command_runner.go:130] >         "message": "",
	I1213 10:31:05.832337  353396 command_runner.go:130] >         "reason": "",
	I1213 10:31:05.832344  353396 command_runner.go:130] >         "status": true,
	I1213 10:31:05.832354  353396 command_runner.go:130] >         "type": "RuntimeReady"
	I1213 10:31:05.832362  353396 command_runner.go:130] >       },
	I1213 10:31:05.832365  353396 command_runner.go:130] >       {
	I1213 10:31:05.832375  353396 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1213 10:31:05.832380  353396 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1213 10:31:05.832383  353396 command_runner.go:130] >         "status": false,
	I1213 10:31:05.832388  353396 command_runner.go:130] >         "type": "NetworkReady"
	I1213 10:31:05.832396  353396 command_runner.go:130] >       },
	I1213 10:31:05.832399  353396 command_runner.go:130] >       {
	I1213 10:31:05.832422  353396 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1213 10:31:05.832434  353396 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1213 10:31:05.832444  353396 command_runner.go:130] >         "status": false,
	I1213 10:31:05.832451  353396 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1213 10:31:05.832454  353396 command_runner.go:130] >       }
	I1213 10:31:05.832457  353396 command_runner.go:130] >     ]
	I1213 10:31:05.832461  353396 command_runner.go:130] >   }
	I1213 10:31:05.832463  353396 command_runner.go:130] > }
	I1213 10:31:05.834983  353396 cni.go:84] Creating CNI manager for ""
	I1213 10:31:05.835008  353396 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
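	The NetworkReady=false condition reported by crictl info above ("cni plugin not initialized") is expected at this stage: kindnet has not yet been deployed, so /etc/cni/net.d is still empty. One way to watch the condition flip once the CNI DaemonSet comes up (again assuming jq on the host):
	
	    minikube ssh -p functional-652709 -- sudo crictl info | jq '.status.conditions[] | select(.type=="NetworkReady")'
	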
	I1213 10:31:05.835032  353396 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:31:05.835055  353396 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-652709 NodeName:functional-652709 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:31:05.835177  353396 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-652709"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 10:31:05.835253  353396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:31:05.843333  353396 command_runner.go:130] > kubeadm
	I1213 10:31:05.843355  353396 command_runner.go:130] > kubectl
	I1213 10:31:05.843360  353396 command_runner.go:130] > kubelet
	I1213 10:31:05.843375  353396 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:31:05.843451  353396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:31:05.851169  353396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 10:31:05.865230  353396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:31:05.877883  353396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
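	The rendered kubeadm config is staged as kubeadm.yaml.new rather than applied directly; a later step diffs it against the active file to decide whether the control plane needs reconfiguration. The equivalent manual check, as a sketch against this profile:
	
	    minikube ssh -p functional-652709 -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	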
	I1213 10:31:05.891827  353396 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:31:05.896023  353396 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1213 10:31:05.896126  353396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:31:06.037110  353396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:31:06.663693  353396 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709 for IP: 192.168.49.2
	I1213 10:31:06.663826  353396 certs.go:195] generating shared ca certs ...
	I1213 10:31:06.663858  353396 certs.go:227] acquiring lock for ca certs: {Name:mkca84a85b1b664dba08ef567e84d493256f825e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:31:06.664061  353396 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key
	I1213 10:31:06.664135  353396 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key
	I1213 10:31:06.664169  353396 certs.go:257] generating profile certs ...
	I1213 10:31:06.664331  353396 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.key
	I1213 10:31:06.664442  353396 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.key.86e7afd1
	I1213 10:31:06.664517  353396 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.key
	I1213 10:31:06.664552  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 10:31:06.664592  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 10:31:06.664634  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 10:31:06.664671  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 10:31:06.664701  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 10:31:06.664745  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 10:31:06.664781  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 10:31:06.664811  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 10:31:06.664893  353396 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem (1338 bytes)
	W1213 10:31:06.664965  353396 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915_empty.pem, impossibly tiny 0 bytes
	I1213 10:31:06.664999  353396 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:31:06.665056  353396 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:31:06.665113  353396 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:31:06.665174  353396 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem (1675 bytes)
	I1213 10:31:06.665258  353396 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 10:31:06.665367  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> /usr/share/ca-certificates/3089152.pem
	I1213 10:31:06.665414  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:31:06.665453  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem -> /usr/share/ca-certificates/308915.pem
	I1213 10:31:06.666083  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:31:06.686373  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:31:06.706393  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:31:06.727893  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:31:06.748376  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:31:06.769115  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 10:31:06.788184  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:31:06.807317  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:31:06.826240  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /usr/share/ca-certificates/3089152.pem (1708 bytes)
	I1213 10:31:06.845063  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:31:06.863130  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem --> /usr/share/ca-certificates/308915.pem (1338 bytes)
	I1213 10:31:06.881577  353396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:31:06.894536  353396 ssh_runner.go:195] Run: openssl version
	I1213 10:31:06.900741  353396 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1213 10:31:06.901231  353396 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/308915.pem
	I1213 10:31:06.909107  353396 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/308915.pem /etc/ssl/certs/308915.pem
	I1213 10:31:06.916518  353396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/308915.pem
	I1213 10:31:06.920250  353396 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 10:22 /usr/share/ca-certificates/308915.pem
	I1213 10:31:06.920295  353396 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:22 /usr/share/ca-certificates/308915.pem
	I1213 10:31:06.920347  353396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308915.pem
	I1213 10:31:06.961321  353396 command_runner.go:130] > 51391683
	I1213 10:31:06.961405  353396 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:31:06.969200  353396 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3089152.pem
	I1213 10:31:06.976714  353396 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3089152.pem /etc/ssl/certs/3089152.pem
	I1213 10:31:06.984537  353396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3089152.pem
	I1213 10:31:06.988716  353396 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 10:22 /usr/share/ca-certificates/3089152.pem
	I1213 10:31:06.988763  353396 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:22 /usr/share/ca-certificates/3089152.pem
	I1213 10:31:06.988817  353396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3089152.pem
	I1213 10:31:07.029862  353396 command_runner.go:130] > 3ec20f2e
	I1213 10:31:07.030284  353396 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:31:07.037958  353396 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:31:07.045451  353396 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:31:07.053144  353396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:31:07.056994  353396 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:31:07.057051  353396 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:31:07.057104  353396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:31:07.097856  353396 command_runner.go:130] > b5213941
	I1213 10:31:07.098292  353396 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
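	The hash-and-symlink sequence above is how OpenSSL's CA lookup works: certificates in /etc/ssl/certs are found via a <subject-hash>.0 filename, so each PEM is hashed with openssl x509 -hash and linked under that name. A minimal sketch of the same two steps for the minikubeCA certificate (hash value taken from the log above):
	
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/b5213941.0
	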
	I1213 10:31:07.106039  353396 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:31:07.109917  353396 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:31:07.109945  353396 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 10:31:07.109953  353396 command_runner.go:130] > Device: 259,1	Inode: 3399222     Links: 1
	I1213 10:31:07.109960  353396 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 10:31:07.109966  353396 command_runner.go:130] > Access: 2025-12-13 10:26:59.103845116 +0000
	I1213 10:31:07.109971  353396 command_runner.go:130] > Modify: 2025-12-13 10:22:52.641441584 +0000
	I1213 10:31:07.109977  353396 command_runner.go:130] > Change: 2025-12-13 10:22:52.641441584 +0000
	I1213 10:31:07.109982  353396 command_runner.go:130] >  Birth: 2025-12-13 10:22:52.641441584 +0000
	I1213 10:31:07.110079  353396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:31:07.151277  353396 command_runner.go:130] > Certificate will not expire
	I1213 10:31:07.151699  353396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:31:07.192420  353396 command_runner.go:130] > Certificate will not expire
	I1213 10:31:07.192514  353396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:31:07.233686  353396 command_runner.go:130] > Certificate will not expire
	I1213 10:31:07.233923  353396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:31:07.275302  353396 command_runner.go:130] > Certificate will not expire
	I1213 10:31:07.275760  353396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:31:07.324799  353396 command_runner.go:130] > Certificate will not expire
	I1213 10:31:07.325290  353396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 10:31:07.377047  353396 command_runner.go:130] > Certificate will not expire
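	Each openssl x509 -checkend 86400 call asks whether the certificate expires within the next 86400 seconds (24 hours); exit status 0 prints "Certificate will not expire" and lets minikube skip regeneration. A sketch of the same check with an explicit branch, using a certificate path from the log:
	
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 && echo ok || echo 'expires within 24h; would regenerate'
	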
	I1213 10:31:07.377629  353396 kubeadm.go:401] StartCluster: {Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:31:07.377757  353396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 10:31:07.377843  353396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:31:07.405423  353396 cri.go:89] found id: ""
	I1213 10:31:07.405508  353396 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:31:07.414529  353396 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1213 10:31:07.414595  353396 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1213 10:31:07.414615  353396 command_runner.go:130] > /var/lib/minikube/etcd:
	I1213 10:31:07.415690  353396 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:31:07.415743  353396 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:31:07.415805  353396 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:31:07.423401  353396 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:31:07.423850  353396 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-652709" does not appear in /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:31:07.423998  353396 kubeconfig.go:62] /home/jenkins/minikube-integration/22127-307042/kubeconfig needs updating (will repair): [kubeconfig missing "functional-652709" cluster setting kubeconfig missing "functional-652709" context setting]
	I1213 10:31:07.424313  353396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/kubeconfig: {Name:mk9039d1d506122ff52224469c478e739c5ebabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
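	minikube repairs the kubeconfig here by writing the missing "functional-652709" cluster and context entries back into the file. A sketch for inspecting the result from the host (standard kubectl subcommands, KUBECONFIG assumed to point at the file above):
	
	    kubectl config get-contexts
	    kubectl config view -o jsonpath='{.clusters[*].name}'
	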
	I1213 10:31:07.424829  353396 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:31:07.425032  353396 kapi.go:59] client config for functional-652709: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt", KeyFile:"/home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.key", CAFile:"/home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 10:31:07.425626  353396 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 10:31:07.425778  353396 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 10:31:07.425812  353396 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 10:31:07.425854  353396 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 10:31:07.425888  353396 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 10:31:07.425723  353396 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 10:31:07.426245  353396 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:31:07.437887  353396 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1213 10:31:07.437960  353396 kubeadm.go:602] duration metric: took 22.197398ms to restartPrimaryControlPlane
	I1213 10:31:07.437984  353396 kubeadm.go:403] duration metric: took 60.362619ms to StartCluster
	I1213 10:31:07.438027  353396 settings.go:142] acquiring lock: {Name:mk079e9a25ebbc2c8fbae42d4c6ed096a652c00b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:31:07.438107  353396 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:31:07.438874  353396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/kubeconfig: {Name:mk9039d1d506122ff52224469c478e739c5ebabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:31:07.439133  353396 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 10:31:07.439572  353396 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:31:07.439649  353396 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
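	The toEnable map above is the default addon set for this start: only storage-provisioner and default-storageclass are true. The same toggles can be flipped by hand on the profile, e.g.:
	
	    minikube addons enable storage-provisioner -p functional-652709
	    minikube addons list -p functional-652709
	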
	I1213 10:31:07.439895  353396 addons.go:70] Setting storage-provisioner=true in profile "functional-652709"
	I1213 10:31:07.439924  353396 addons.go:239] Setting addon storage-provisioner=true in "functional-652709"
	I1213 10:31:07.440086  353396 host.go:66] Checking if "functional-652709" exists ...
	I1213 10:31:07.439942  353396 addons.go:70] Setting default-storageclass=true in profile "functional-652709"
	I1213 10:31:07.440166  353396 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-652709"
	I1213 10:31:07.440530  353396 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
	I1213 10:31:07.440672  353396 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
	I1213 10:31:07.445924  353396 out.go:179] * Verifying Kubernetes components...
	I1213 10:31:07.449291  353396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:31:07.477163  353396 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:31:07.477818  353396 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:31:07.477982  353396 kapi.go:59] client config for functional-652709: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt", KeyFile:"/home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.key", CAFile:"/home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 10:31:07.478289  353396 addons.go:239] Setting addon default-storageclass=true in "functional-652709"
	I1213 10:31:07.478317  353396 host.go:66] Checking if "functional-652709" exists ...
	I1213 10:31:07.478815  353396 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
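The kapi.go line above dumps the rest.Config built from the test's kubeconfig. A sketch of producing that kind of config with client-go's standard loader (the kubeconfig path is the one from the log; everything else is stock client-go):

```go
// Sketch, assuming k8s.io/client-go: load a kubeconfig and build the
// rest.Config that the kapi.go log line prints.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("",
		"/home/jenkins/minikube-integration/22127-307042/kubeconfig")
	if err != nil {
		panic(err)
	}
	fmt.Println("API server:", cfg.Host) // e.g. https://192.168.49.2:8441

	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_ = clientset // ready for typed API calls against the profile's apiserver
}
```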
	I1213 10:31:07.480787  353396 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:07.480804  353396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:31:07.480857  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:07.506052  353396 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:07.506074  353396 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:31:07.506149  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:07.532221  353396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:31:07.553427  353396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
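The sshutil lines record key-authenticated SSH clients to 127.0.0.1:33125, the host port docker maps to the node's sshd. A sketch of opening that kind of client with golang.org/x/crypto/ssh (dialNode is a hypothetical helper; the address, key path, and user are taken from the log):

```go
// Sketch: key-based SSH dial to the docker-driver node, as in the
// sshutil.go lines above.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func dialNode(addr, keyPath, user string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only shortcut
	}
	return ssh.Dial("tcp", addr, cfg)
}

func main() {
	client, err := dialNode("127.0.0.1:33125",
		"/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa",
		"docker")
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer client.Close()
	fmt.Println("ssh connected")
}
```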
	I1213 10:31:07.654835  353396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:31:07.677297  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:07.691553  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:08.413950  353396 node_ready.go:35] waiting up to 6m0s for node "functional-652709" to be "Ready" ...
	I1213 10:31:08.414025  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:08.414055  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:08.414088  353396 retry.go:31] will retry after 345.496875ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:08.414094  353396 type.go:168] "Request Body" body=""
	I1213 10:31:08.414127  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:08.414139  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:08.414145  353396 retry.go:31] will retry after 223.686843ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
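Each apply fails because kubectl cannot fetch the OpenAPI schema for validation while the apiserver is down, and addons.go answers with a delayed retry. A minimal sketch of that apply-and-retry loop (retryApply is hypothetical; it uses a plain `kubectl` on PATH rather than the versioned binary path in the log, and a simple doubling delay rather than minikube's exact schedule):

```go
// Sketch of the retry pattern in the addons.go/retry.go lines: run
// `kubectl apply`, and on failure sleep and retry with a growing delay.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func retryApply(manifest string, attempts int) error {
	delay := 300 * time.Millisecond
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			"kubectl", "apply", "--force", "-f", manifest)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("apply failed, will retry after %v: %s\n", delay, out)
			time.Sleep(delay)
			delay *= 2 // roughly the growth visible in the log
			continue
		}
		return nil
	}
	return fmt.Errorf("apply %s: retries exhausted", manifest)
}

func main() {
	if err := retryApply("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
		fmt.Println(err)
	}
}
```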
	I1213 10:31:08.414166  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:08.414498  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
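The node_ready.go wait behind these GETs polls /api/v1/nodes/functional-652709 until the Ready condition turns True or six minutes elapse. A sketch of the same wait with client-go (waitNodeReady is a hypothetical stand-in for minikube's helper; the ~500ms tick mirrors the request cadence in the log):

```go
// Sketch, assuming client-go: poll a node until its Ready condition is
// True or a timeout expires, tolerating connection-refused errors.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		// Connection refused and not-yet-Ready both land here; retry on a tick.
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not Ready within %v", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitNodeReady(cs, "functional-652709", 6*time.Minute))
}
```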
	I1213 10:31:08.639014  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:08.708995  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:08.709048  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:08.709067  353396 retry.go:31] will retry after 375.63163ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:08.760277  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:08.818789  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:08.818835  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:08.818856  353396 retry.go:31] will retry after 406.416897ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:08.915066  353396 type.go:168] "Request Body" body=""
	I1213 10:31:08.915143  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:08.915484  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:09.084944  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:09.142294  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:09.145823  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:09.145856  353396 retry.go:31] will retry after 462.162588ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:09.226047  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:09.284957  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:09.285005  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:09.285029  353396 retry.go:31] will retry after 590.841892ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:09.414170  353396 type.go:168] "Request Body" body=""
	I1213 10:31:09.414270  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:09.414569  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:09.609047  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:09.669723  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:09.669808  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:09.669831  353396 retry.go:31] will retry after 579.936823ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:09.876057  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:09.914654  353396 type.go:168] "Request Body" body=""
	I1213 10:31:09.914781  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:09.915113  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
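The Accept header on these requests, "application/vnd.kubernetes.protobuf,application/json", asks the apiserver for protobuf first and JSON as a fallback. With client-go that preference is a rest.Config setting; a small sketch:

```go
// Sketch: expressing the protobuf-first content negotiation seen in the
// round_trippers request headers via client-go's rest.Config fields.
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Prefer protobuf, fall back to JSON, matching the Accept header above.
	cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
	cfg.ContentType = "application/vnd.kubernetes.protobuf"
	fmt.Println(cfg.AcceptContentTypes)
}
```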
	I1213 10:31:09.958653  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:09.959319  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:09.959356  353396 retry.go:31] will retry after 607.747477ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:10.250896  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:10.320327  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:10.320375  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:10.320395  353396 retry.go:31] will retry after 1.522220042s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:10.414670  353396 type.go:168] "Request Body" body=""
	I1213 10:31:10.414776  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:10.415078  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:10.415128  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
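Every failure in this stretch is the same symptom: nothing is listening on port 8441 yet, neither on localhost inside the node nor on 192.168.49.2 from the host. A plain TCP probe reproduces the check outside of kubectl:

```go
// Quick probe sketch: net.DialTimeout shows the same
// "connect: connection refused" that kubectl and the node wait see.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	for _, addr := range []string{"127.0.0.1:8441", "192.168.49.2:8441"} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Println(addr, "->", err) // e.g. connect: connection refused
			continue
		}
		conn.Close()
		fmt.Println(addr, "-> open")
	}
}
```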
	I1213 10:31:10.567453  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:10.637133  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:10.637170  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:10.637192  353396 retry.go:31] will retry after 1.738217883s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:10.914619  353396 type.go:168] "Request Body" body=""
	I1213 10:31:10.914713  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:10.915040  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:11.414837  353396 type.go:168] "Request Body" body=""
	I1213 10:31:11.414916  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:11.415223  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:11.842893  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:11.907661  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:11.907696  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:11.907728  353396 retry.go:31] will retry after 2.533033731s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:11.915037  353396 type.go:168] "Request Body" body=""
	I1213 10:31:11.915117  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:11.915423  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:12.376116  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:12.414883  353396 type.go:168] "Request Body" body=""
	I1213 10:31:12.414962  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:12.415244  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:12.415286  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:12.436301  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:12.440043  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:12.440078  353396 retry.go:31] will retry after 2.549851387s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:12.914750  353396 type.go:168] "Request Body" body=""
	I1213 10:31:12.914826  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:12.915091  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:13.414886  353396 type.go:168] "Request Body" body=""
	I1213 10:31:13.414964  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:13.415325  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:13.914980  353396 type.go:168] "Request Body" body=""
	I1213 10:31:13.915058  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:13.915431  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:14.414144  353396 type.go:168] "Request Body" body=""
	I1213 10:31:14.414226  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:14.414516  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:14.441795  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:14.521460  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:14.521500  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:14.521521  353396 retry.go:31] will retry after 3.212514963s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:14.915209  353396 type.go:168] "Request Body" body=""
	I1213 10:31:14.915291  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:14.915586  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:14.915630  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:14.990917  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:15.080462  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:15.084181  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:15.084216  353396 retry.go:31] will retry after 3.733369975s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:15.414758  353396 type.go:168] "Request Body" body=""
	I1213 10:31:15.414836  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:15.415124  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:15.914893  353396 type.go:168] "Request Body" body=""
	I1213 10:31:15.914962  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:15.915239  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:16.415068  353396 type.go:168] "Request Body" body=""
	I1213 10:31:16.415147  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:16.415460  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:16.914139  353396 type.go:168] "Request Body" body=""
	I1213 10:31:16.914218  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:16.914520  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:17.414166  353396 type.go:168] "Request Body" body=""
	I1213 10:31:17.414237  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:17.414497  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:17.414542  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:17.734589  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:17.791638  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:17.795431  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:17.795464  353396 retry.go:31] will retry after 2.280639456s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:17.914828  353396 type.go:168] "Request Body" body=""
	I1213 10:31:17.914907  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:17.915229  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:18.415056  353396 type.go:168] "Request Body" body=""
	I1213 10:31:18.415138  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:18.415477  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:18.817969  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:18.882172  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:18.882215  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:18.882235  353396 retry.go:31] will retry after 4.138686797s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:18.914321  353396 type.go:168] "Request Body" body=""
	I1213 10:31:18.914392  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:18.914663  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:19.414265  353396 type.go:168] "Request Body" body=""
	I1213 10:31:19.414351  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:19.414671  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:19.414743  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:19.914452  353396 type.go:168] "Request Body" body=""
	I1213 10:31:19.914532  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:19.914885  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:20.077334  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:20.142139  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:20.142182  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:20.142203  353396 retry.go:31] will retry after 8.217804099s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
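The irregular retry delays logged by retry.go (345ms, 223ms, 375ms, ... 8.2s, 9.2s) are characteristic of jittered exponential backoff: the base delay grows by a factor per attempt and each sleep is randomized so concurrent retries don't synchronize. A sketch of the same idea with apimachinery's wait helper (an illustration, not minikube's retry.go; the Duration/Factor/Jitter values are assumptions):

```go
// Sketch, assuming k8s.io/apimachinery: jittered exponential backoff,
// the pattern behind the growing, slightly randomized delays above.
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	backoff := wait.Backoff{
		Duration: 300 * time.Millisecond, // initial delay
		Factor:   1.5,                    // growth per attempt
		Jitter:   0.5,                    // randomize so callers don't sync up
		Steps:    10,                     // give up after this many tries
	}
	attempt := 0
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempt++
		fmt.Println("attempt", attempt)
		return false, nil // pretend the apiserver is still refusing
	})
	fmt.Println(err) // times out once Steps are exhausted
}
```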
	I1213 10:31:20.414481  353396 type.go:168] "Request Body" body=""
	I1213 10:31:20.414554  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:20.414845  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:20.914228  353396 type.go:168] "Request Body" body=""
	I1213 10:31:20.914302  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:20.914590  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:21.414310  353396 type.go:168] "Request Body" body=""
	I1213 10:31:21.414387  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:21.414748  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:21.414804  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:21.914112  353396 type.go:168] "Request Body" body=""
	I1213 10:31:21.914192  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:21.914465  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:22.414190  353396 type.go:168] "Request Body" body=""
	I1213 10:31:22.414276  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:22.414625  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:22.914222  353396 type.go:168] "Request Body" body=""
	I1213 10:31:22.914304  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:22.914654  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:23.021940  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:23.082413  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:23.086273  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:23.086307  353396 retry.go:31] will retry after 3.228749017s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:23.414853  353396 type.go:168] "Request Body" body=""
	I1213 10:31:23.414928  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:23.415204  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:23.415248  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:23.915086  353396 type.go:168] "Request Body" body=""
	I1213 10:31:23.915169  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:23.915500  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:24.414244  353396 type.go:168] "Request Body" body=""
	I1213 10:31:24.414323  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:24.414750  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:24.914140  353396 type.go:168] "Request Body" body=""
	I1213 10:31:24.914235  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:24.914512  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:25.414276  353396 type.go:168] "Request Body" body=""
	I1213 10:31:25.414350  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:25.414719  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:25.914418  353396 type.go:168] "Request Body" body=""
	I1213 10:31:25.914503  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:25.914851  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:25.914921  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:26.315317  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:26.370308  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:26.374436  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:26.374468  353396 retry.go:31] will retry after 6.181513775s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:26.414616  353396 type.go:168] "Request Body" body=""
	I1213 10:31:26.414702  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:26.414956  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:26.914223  353396 type.go:168] "Request Body" body=""
	I1213 10:31:26.914299  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:26.914631  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:27.414210  353396 type.go:168] "Request Body" body=""
	I1213 10:31:27.414287  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:27.414626  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:27.914667  353396 type.go:168] "Request Body" body=""
	I1213 10:31:27.914756  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:27.915024  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:27.915076  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:28.360839  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:28.414331  353396 type.go:168] "Request Body" body=""
	I1213 10:31:28.414406  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:28.414626  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:28.418709  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:28.418758  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:28.418778  353396 retry.go:31] will retry after 9.214302946s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
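
The retry.go lines above show jittered backoff between apply attempts (9.2s here, 16.9s for the next manifest, and so on). A minimal sketch of that pattern with apimachinery's wait.Backoff; the Duration/Factor/Jitter/Steps values are placeholders, not minikube's own tuning:

    package retryutil

    import (
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // applyWithBackoff retries fn with jittered exponential backoff,
    // in the spirit of the "will retry after ..." delays logged above.
    func applyWithBackoff(fn func() error) error {
        backoff := wait.Backoff{
            Duration: 9 * time.Second, // first delay (placeholder)
            Factor:   1.5,             // growth per attempt
            Jitter:   0.5,             // randomize so retries do not align
            Steps:    5,               // give up after five attempts
        }
        return wait.ExponentialBackoff(backoff, func() (bool, error) {
            if err := fn(); err != nil {
                return false, nil // not done; back off and retry
            }
            return true, nil
        })
    }

Returning (false, nil) from the condition keeps the loop going; returning a non-nil error would abort immediately, which is why a "connection refused" is mapped to a plain retry here.
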
	I1213 10:31:28.914367  353396 type.go:168] "Request Body" body=""
	I1213 10:31:28.914492  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:28.914860  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:29.414102  353396 type.go:168] "Request Body" body=""
	I1213 10:31:29.414175  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:29.414432  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:29.914164  353396 type.go:168] "Request Body" body=""
	I1213 10:31:29.914249  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:29.914544  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:30.414171  353396 type.go:168] "Request Body" body=""
	I1213 10:31:30.414256  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:30.414595  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:30.414647  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:30.914147  353396 type.go:168] "Request Body" body=""
	I1213 10:31:30.914252  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:30.914572  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:31.414262  353396 type.go:168] "Request Body" body=""
	I1213 10:31:31.414347  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:31.414732  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:31.914303  353396 type.go:168] "Request Body" body=""
	I1213 10:31:31.914387  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:31.914757  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:32.415148  353396 type.go:168] "Request Body" body=""
	I1213 10:31:32.415224  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:32.415495  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:32.415554  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:32.557021  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:32.617384  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:32.617431  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:32.617463  353396 retry.go:31] will retry after 16.934984193s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
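
Each apply attempt is a remote command of the form "sudo KUBECONFIG=... kubectl apply --force -f <manifest>", executed through ssh_runner inside the node. A local os/exec sketch of the same invocation, with the binary and kubeconfig paths copied from the log (the real run goes over SSH rather than a local exec):

    package addons

    import (
        "fmt"
        "os/exec"
    )

    // applyAddon reproduces the command line from the ssh_runner entries:
    // sudo sets KUBECONFIG inline before invoking the pinned kubectl binary.
    func applyAddon(manifest string) error {
        cmd := exec.Command("sudo",
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
            "apply", "--force", "-f", manifest)
        out, err := cmd.CombinedOutput()
        if err != nil {
            // kubectl exits 1 here because validation cannot reach the
            // apiserver; addons.go treats that as retryable, as seen above.
            return fmt.Errorf("apply %s failed: %w\n%s", manifest, err, out)
        }
        return nil
    }
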
	I1213 10:31:32.914304  353396 type.go:168] "Request Body" body=""
	I1213 10:31:32.914388  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:32.914742  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:33.414206  353396 type.go:168] "Request Body" body=""
	I1213 10:31:33.414289  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:33.414637  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:33.914139  353396 type.go:168] "Request Body" body=""
	I1213 10:31:33.914219  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:33.914504  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:34.414239  353396 type.go:168] "Request Body" body=""
	I1213 10:31:34.414324  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:34.414665  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:34.914262  353396 type.go:168] "Request Body" body=""
	I1213 10:31:34.914338  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:34.914682  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:34.914754  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:35.414981  353396 type.go:168] "Request Body" body=""
	I1213 10:31:35.415048  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:35.415330  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:35.915144  353396 type.go:168] "Request Body" body=""
	I1213 10:31:35.915224  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:35.915612  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:36.414208  353396 type.go:168] "Request Body" body=""
	I1213 10:31:36.414294  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:36.414625  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:36.914192  353396 type.go:168] "Request Body" body=""
	I1213 10:31:36.914277  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:36.914578  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:37.414237  353396 type.go:168] "Request Body" body=""
	I1213 10:31:37.414313  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:37.414629  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:37.414735  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:37.633334  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:37.695165  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:37.698650  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:37.698681  353396 retry.go:31] will retry after 9.333447966s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
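
The validation error itself is indirect: before checking the manifest, kubectl downloads the OpenAPI document from the apiserver, and it is that GET on https://localhost:8441/openapi/v2 which is refused; the storageclass manifest never actually gets validated. A small net/http probe of the same endpoint makes the failure mode concrete (InsecureSkipVerify is for this illustration only):

    package probe

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // probeOpenAPI issues the same GET kubectl performs before client-side
    // validation, surfacing the "connection refused" seen throughout the log.
    func probeOpenAPI() error {
        client := &http.Client{
            Timeout: 32 * time.Second, // matches kubectl's ?timeout=32s
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://localhost:8441/openapi/v2")
        if err != nil {
            return err // e.g. dial tcp [::1]:8441: connect: connection refused
        }
        defer resp.Body.Close()
        fmt.Println("openapi status:", resp.Status)
        return nil
    }

This is also why the suggested workaround is --validate=false: skipping client-side validation removes the OpenAPI fetch, though the apply itself would still fail while the apiserver is down.
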
	I1213 10:31:37.915161  353396 type.go:168] "Request Body" body=""
	I1213 10:31:37.915240  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:37.915589  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:38.414123  353396 type.go:168] "Request Body" body=""
	I1213 10:31:38.414195  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:38.414520  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:38.914231  353396 type.go:168] "Request Body" body=""
	I1213 10:31:38.914310  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:38.914622  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:39.414370  353396 type.go:168] "Request Body" body=""
	I1213 10:31:39.414450  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:39.414771  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:39.414825  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:39.914151  353396 type.go:168] "Request Body" body=""
	I1213 10:31:39.914247  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:39.914597  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:40.414232  353396 type.go:168] "Request Body" body=""
	I1213 10:31:40.414305  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:40.414590  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:40.914274  353396 type.go:168] "Request Body" body=""
	I1213 10:31:40.914351  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:40.914714  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:41.414140  353396 type.go:168] "Request Body" body=""
	I1213 10:31:41.414213  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:41.414477  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:41.914180  353396 type.go:168] "Request Body" body=""
	I1213 10:31:41.914281  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:41.914609  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:41.914666  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:42.414204  353396 type.go:168] "Request Body" body=""
	I1213 10:31:42.414282  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:42.414600  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:42.914120  353396 type.go:168] "Request Body" body=""
	I1213 10:31:42.914194  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:42.914564  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:43.414289  353396 type.go:168] "Request Body" body=""
	I1213 10:31:43.414375  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:43.414737  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:43.914224  353396 type.go:168] "Request Body" body=""
	I1213 10:31:43.914304  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:43.914640  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:43.914712  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:44.414209  353396 type.go:168] "Request Body" body=""
	I1213 10:31:44.414295  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:44.414641  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:44.914233  353396 type.go:168] "Request Body" body=""
	I1213 10:31:44.914306  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:44.914657  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:45.414435  353396 type.go:168] "Request Body" body=""
	I1213 10:31:45.414551  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:45.414971  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:45.914734  353396 type.go:168] "Request Body" body=""
	I1213 10:31:45.914804  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:45.915154  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:45.915214  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:46.414939  353396 type.go:168] "Request Body" body=""
	I1213 10:31:46.415012  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:46.415313  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:46.915102  353396 type.go:168] "Request Body" body=""
	I1213 10:31:46.915186  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:46.915495  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:47.032831  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:47.089360  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:47.092850  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:47.092882  353396 retry.go:31] will retry after 14.257705184s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:47.414212  353396 type.go:168] "Request Body" body=""
	I1213 10:31:47.414287  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:47.414544  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:47.914676  353396 type.go:168] "Request Body" body=""
	I1213 10:31:47.914771  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:47.915126  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:48.414973  353396 type.go:168] "Request Body" body=""
	I1213 10:31:48.415048  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:48.415397  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:48.415453  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:48.914935  353396 type.go:168] "Request Body" body=""
	I1213 10:31:48.915016  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:48.915282  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:49.415024  353396 type.go:168] "Request Body" body=""
	I1213 10:31:49.415102  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:49.415400  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:49.552673  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:49.614333  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:49.614392  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:49.614413  353396 retry.go:31] will retry after 23.024485713s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:49.914879  353396 type.go:168] "Request Body" body=""
	I1213 10:31:49.914950  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:49.915276  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:50.415038  353396 type.go:168] "Request Body" body=""
	I1213 10:31:50.415112  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:50.415429  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:50.415489  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:50.914923  353396 type.go:168] "Request Body" body=""
	I1213 10:31:50.915005  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:50.915323  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:51.414987  353396 type.go:168] "Request Body" body=""
	I1213 10:31:51.415064  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:51.415444  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:51.915111  353396 type.go:168] "Request Body" body=""
	I1213 10:31:51.915192  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:51.915480  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:52.414202  353396 type.go:168] "Request Body" body=""
	I1213 10:31:52.414285  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:52.414620  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:52.914489  353396 type.go:168] "Request Body" body=""
	I1213 10:31:52.914562  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:52.914926  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:52.914988  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:53.414747  353396 type.go:168] "Request Body" body=""
	I1213 10:31:53.414820  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:53.415090  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:53.914866  353396 type.go:168] "Request Body" body=""
	I1213 10:31:53.914939  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:53.915273  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:54.415083  353396 type.go:168] "Request Body" body=""
	I1213 10:31:54.415160  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:54.415481  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:54.914141  353396 type.go:168] "Request Body" body=""
	I1213 10:31:54.914222  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:54.914536  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:55.414218  353396 type.go:168] "Request Body" body=""
	I1213 10:31:55.414293  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:55.414637  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:55.414730  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:55.914444  353396 type.go:168] "Request Body" body=""
	I1213 10:31:55.914529  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:55.914897  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:56.414701  353396 type.go:168] "Request Body" body=""
	I1213 10:31:56.414795  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:56.415073  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:56.914860  353396 type.go:168] "Request Body" body=""
	I1213 10:31:56.914937  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:56.915228  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:57.415017  353396 type.go:168] "Request Body" body=""
	I1213 10:31:57.415092  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:57.415406  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:57.415455  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:57.914498  353396 type.go:168] "Request Body" body=""
	I1213 10:31:57.914564  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:57.914847  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:58.414239  353396 type.go:168] "Request Body" body=""
	I1213 10:31:58.414333  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:58.414679  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:58.914256  353396 type.go:168] "Request Body" body=""
	I1213 10:31:58.914332  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:58.914624  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:59.414297  353396 type.go:168] "Request Body" body=""
	I1213 10:31:59.414370  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:59.414637  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:59.914203  353396 type.go:168] "Request Body" body=""
	I1213 10:31:59.914281  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:59.914613  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:59.914668  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:00.414938  353396 type.go:168] "Request Body" body=""
	I1213 10:32:00.415045  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:00.415391  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:00.914138  353396 type.go:168] "Request Body" body=""
	I1213 10:32:00.914218  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:00.914514  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:01.350855  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:32:01.414382  353396 type.go:168] "Request Body" body=""
	I1213 10:32:01.414452  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:01.414751  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:01.421471  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:32:01.421509  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:32:01.421528  353396 retry.go:31] will retry after 32.770422349s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:32:01.914172  353396 type.go:168] "Request Body" body=""
	I1213 10:32:01.914251  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:01.914603  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:02.414231  353396 type.go:168] "Request Body" body=""
	I1213 10:32:02.414337  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:02.414661  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:02.414753  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:02.914852  353396 type.go:168] "Request Body" body=""
	I1213 10:32:02.914942  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:02.915291  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:03.415068  353396 type.go:168] "Request Body" body=""
	I1213 10:32:03.415140  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:03.415560  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:03.914265  353396 type.go:168] "Request Body" body=""
	I1213 10:32:03.914365  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:03.914734  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:04.414487  353396 type.go:168] "Request Body" body=""
	I1213 10:32:04.414564  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:04.414920  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:04.414976  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:04.914751  353396 type.go:168] "Request Body" body=""
	I1213 10:32:04.914822  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:04.915267  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:05.415063  353396 type.go:168] "Request Body" body=""
	I1213 10:32:05.415138  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:05.415446  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:05.914158  353396 type.go:168] "Request Body" body=""
	I1213 10:32:05.914237  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:05.914537  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:06.414151  353396 type.go:168] "Request Body" body=""
	I1213 10:32:06.414241  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:06.414588  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:06.914189  353396 type.go:168] "Request Body" body=""
	I1213 10:32:06.914264  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:06.914626  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:06.914721  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:07.414237  353396 type.go:168] "Request Body" body=""
	I1213 10:32:07.414336  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:07.414675  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:07.914726  353396 type.go:168] "Request Body" body=""
	I1213 10:32:07.914801  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:07.915094  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:08.414945  353396 type.go:168] "Request Body" body=""
	I1213 10:32:08.415038  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:08.415395  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:08.914139  353396 type.go:168] "Request Body" body=""
	I1213 10:32:08.914221  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:08.914527  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:09.414118  353396 type.go:168] "Request Body" body=""
	I1213 10:32:09.414186  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:09.414531  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:09.414607  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:09.914205  353396 type.go:168] "Request Body" body=""
	I1213 10:32:09.914276  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:09.914632  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:10.414215  353396 type.go:168] "Request Body" body=""
	I1213 10:32:10.414292  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:10.414629  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:10.914203  353396 type.go:168] "Request Body" body=""
	I1213 10:32:10.914292  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:10.914593  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:11.414242  353396 type.go:168] "Request Body" body=""
	I1213 10:32:11.414348  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:11.414703  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:11.414757  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:11.914433  353396 type.go:168] "Request Body" body=""
	I1213 10:32:11.914511  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:11.914889  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:12.414571  353396 type.go:168] "Request Body" body=""
	I1213 10:32:12.414678  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:12.414978  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:12.639532  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:32:12.701723  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:32:12.701768  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:32:12.701788  353396 retry.go:31] will retry after 24.373252759s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:32:12.915117  353396 type.go:168] "Request Body" body=""
	I1213 10:32:12.915211  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:12.915511  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:13.414252  353396 type.go:168] "Request Body" body=""
	I1213 10:32:13.414325  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:13.414721  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:13.414794  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:13.914428  353396 type.go:168] "Request Body" body=""
	I1213 10:32:13.914518  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:13.914913  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:14.414265  353396 type.go:168] "Request Body" body=""
	I1213 10:32:14.414377  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:14.414786  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:14.914281  353396 type.go:168] "Request Body" body=""
	I1213 10:32:14.914360  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:14.914710  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:15.414252  353396 type.go:168] "Request Body" body=""
	I1213 10:32:15.414344  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:15.414630  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:15.914243  353396 type.go:168] "Request Body" body=""
	I1213 10:32:15.914331  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:15.914660  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:15.914750  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:16.414450  353396 type.go:168] "Request Body" body=""
	I1213 10:32:16.414531  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:16.414846  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:16.914165  353396 type.go:168] "Request Body" body=""
	I1213 10:32:16.914233  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:16.914541  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:17.414213  353396 type.go:168] "Request Body" body=""
	I1213 10:32:17.414341  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:17.414625  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:17.914712  353396 type.go:168] "Request Body" body=""
	I1213 10:32:17.914803  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:17.915126  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:17.915184  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:18.414920  353396 type.go:168] "Request Body" body=""
	I1213 10:32:18.415009  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:18.415286  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:18.915162  353396 type.go:168] "Request Body" body=""
	I1213 10:32:18.915251  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:18.915598  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:19.414275  353396 type.go:168] "Request Body" body=""
	I1213 10:32:19.414357  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:19.414661  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:19.914425  353396 type.go:168] "Request Body" body=""
	I1213 10:32:19.914592  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:19.914937  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:20.414768  353396 type.go:168] "Request Body" body=""
	I1213 10:32:20.414852  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:20.415220  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:20.415278  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:20.915055  353396 type.go:168] "Request Body" body=""
	I1213 10:32:20.915156  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:20.915495  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:21.414184  353396 type.go:168] "Request Body" body=""
	I1213 10:32:21.414260  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:21.414555  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:21.914250  353396 type.go:168] "Request Body" body=""
	I1213 10:32:21.914326  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:21.914677  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:22.414287  353396 type.go:168] "Request Body" body=""
	I1213 10:32:22.414370  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:22.414741  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:22.914735  353396 type.go:168] "Request Body" body=""
	I1213 10:32:22.914804  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:22.915060  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:22.915107  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:23.414877  353396 type.go:168] "Request Body" body=""
	I1213 10:32:23.414953  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:23.415252  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:23.915036  353396 type.go:168] "Request Body" body=""
	I1213 10:32:23.915115  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:23.915451  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:24.415135  353396 type.go:168] "Request Body" body=""
	I1213 10:32:24.415211  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:24.415473  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:24.914198  353396 type.go:168] "Request Body" body=""
	I1213 10:32:24.914282  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:24.914640  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:25.414436  353396 type.go:168] "Request Body" body=""
	I1213 10:32:25.414514  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:25.414854  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:25.414914  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:25.914152  353396 type.go:168] "Request Body" body=""
	I1213 10:32:25.914219  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:25.914483  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:26.414214  353396 type.go:168] "Request Body" body=""
	I1213 10:32:26.414314  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:26.414636  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:26.914231  353396 type.go:168] "Request Body" body=""
	I1213 10:32:26.914307  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:26.914637  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:27.414336  353396 type.go:168] "Request Body" body=""
	I1213 10:32:27.414402  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:27.414666  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:27.914790  353396 type.go:168] "Request Body" body=""
	I1213 10:32:27.914883  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:27.915207  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:27.915256  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:28.414990  353396 type.go:168] "Request Body" body=""
	I1213 10:32:28.415074  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:28.415436  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:28.915099  353396 type.go:168] "Request Body" body=""
	I1213 10:32:28.915173  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:28.915437  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:29.414163  353396 type.go:168] "Request Body" body=""
	I1213 10:32:29.414250  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:29.414561  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:29.914302  353396 type.go:168] "Request Body" body=""
	I1213 10:32:29.914399  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:29.914733  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:30.414157  353396 type.go:168] "Request Body" body=""
	I1213 10:32:30.414241  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:30.414552  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:30.414604  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:30.914233  353396 type.go:168] "Request Body" body=""
	I1213 10:32:30.914307  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:30.914656  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:31.414263  353396 type.go:168] "Request Body" body=""
	I1213 10:32:31.414357  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:31.414708  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:31.914203  353396 type.go:168] "Request Body" body=""
	I1213 10:32:31.914273  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:31.914531  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:32.414222  353396 type.go:168] "Request Body" body=""
	I1213 10:32:32.414304  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:32.414640  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:32.414727  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:32.914510  353396 type.go:168] "Request Body" body=""
	I1213 10:32:32.914599  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:32.914973  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:33.414825  353396 type.go:168] "Request Body" body=""
	I1213 10:32:33.414915  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:33.415280  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:33.915101  353396 type.go:168] "Request Body" body=""
	I1213 10:32:33.915178  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:33.915518  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:34.192937  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:32:34.265284  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:32:34.265320  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:32:34.265405  353396 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 10:32:34.414970  353396 type.go:168] "Request Body" body=""
	I1213 10:32:34.415052  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:34.415423  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:34.415491  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:34.914214  353396 type.go:168] "Request Body" body=""
	I1213 10:32:34.914301  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:34.914655  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:35.414268  353396 type.go:168] "Request Body" body=""
	I1213 10:32:35.414356  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:35.414678  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:35.914239  353396 type.go:168] "Request Body" body=""
	I1213 10:32:35.914322  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:35.914704  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:36.414407  353396 type.go:168] "Request Body" body=""
	I1213 10:32:36.414485  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:36.414823  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:36.914200  353396 type.go:168] "Request Body" body=""
	I1213 10:32:36.914292  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:36.914625  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:36.914719  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:37.076016  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:32:37.141132  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:32:37.141183  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:32:37.141286  353396 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
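
Both addon applies die at the same validation step, so minikube stops retrying and surfaces the callback errors above. The logged command runs kubectl through sudo with KUBECONFIG passed as an assignment on the sudo command line; a sketch of replaying that invocation from Go with os/exec (the paths assume minikube's in-VM layout and will not exist on a host machine, and this is not minikube's ssh_runner, which executes over SSH inside the node):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Mirrors the logged invocation: sudo VAR=value <kubectl> apply --force -f <yaml>.
        cmd := exec.Command("sudo",
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
            "apply", "--force", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            // With the apiserver down this exits 1, matching "Process exited with status 1".
            fmt.Println("apply failed:", err)
        }
    }

The --validate=false escape hatch suggested in the error text would only skip schema validation; the apply itself would still fail while nothing is listening on port 8441.
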
	I1213 10:32:37.146231  353396 out.go:179] * Enabled addons: 
	I1213 10:32:37.149102  353396 addons.go:530] duration metric: took 1m29.709445532s for enable addons: enabled=[]
	I1213 10:32:37.414592  353396 type.go:168] "Request Body" body=""
	I1213 10:32:37.414736  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:37.415128  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:37.914163  353396 type.go:168] "Request Body" body=""
	I1213 10:32:37.914246  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:37.914580  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:38.414336  353396 type.go:168] "Request Body" body=""
	I1213 10:32:38.414415  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:38.414780  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:38.914239  353396 type.go:168] "Request Body" body=""
	I1213 10:32:38.914317  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:38.914675  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:38.914752  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:39.414390  353396 type.go:168] "Request Body" body=""
	I1213 10:32:39.414462  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:39.414811  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:39.914220  353396 type.go:168] "Request Body" body=""
	I1213 10:32:39.914296  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:39.914620  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:40.414231  353396 type.go:168] "Request Body" body=""
	I1213 10:32:40.414307  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:40.414622  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:40.914193  353396 type.go:168] "Request Body" body=""
	I1213 10:32:40.914271  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:40.914548  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:41.414251  353396 type.go:168] "Request Body" body=""
	I1213 10:32:41.414348  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:41.414708  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:41.414763  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:41.914229  353396 type.go:168] "Request Body" body=""
	I1213 10:32:41.914327  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:41.914643  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:42.414161  353396 type.go:168] "Request Body" body=""
	I1213 10:32:42.414248  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:42.414516  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:42.914567  353396 type.go:168] "Request Body" body=""
	I1213 10:32:42.914643  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:42.914974  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:43.414788  353396 type.go:168] "Request Body" body=""
	I1213 10:32:43.414863  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:43.415192  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:43.415248  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:43.915667  353396 type.go:168] "Request Body" body=""
	I1213 10:32:43.915743  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:43.916016  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:44.414833  353396 type.go:168] "Request Body" body=""
	I1213 10:32:44.414913  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:44.415264  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:44.915103  353396 type.go:168] "Request Body" body=""
	I1213 10:32:44.915182  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:44.915522  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:45.414185  353396 type.go:168] "Request Body" body=""
	I1213 10:32:45.414262  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:45.414578  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:45.914231  353396 type.go:168] "Request Body" body=""
	I1213 10:32:45.914307  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:45.914655  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:45.914730  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:46.414248  353396 type.go:168] "Request Body" body=""
	I1213 10:32:46.414348  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:46.414706  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:46.914404  353396 type.go:168] "Request Body" body=""
	I1213 10:32:46.914482  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:46.914848  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:47.414245  353396 type.go:168] "Request Body" body=""
	I1213 10:32:47.414332  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:47.414670  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:47.915115  353396 type.go:168] "Request Body" body=""
	I1213 10:32:47.915188  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:47.915496  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:47.915548  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:48.414163  353396 type.go:168] "Request Body" body=""
	I1213 10:32:48.414231  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:48.414501  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:48.914202  353396 type.go:168] "Request Body" body=""
	I1213 10:32:48.914276  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:48.914656  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:49.414387  353396 type.go:168] "Request Body" body=""
	I1213 10:32:49.414468  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:49.414814  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:49.914540  353396 type.go:168] "Request Body" body=""
	I1213 10:32:49.914615  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:49.914986  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:50.414789  353396 type.go:168] "Request Body" body=""
	I1213 10:32:50.414867  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:50.415215  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:50.415272  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:50.915036  353396 type.go:168] "Request Body" body=""
	I1213 10:32:50.915111  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:50.915455  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:51.414115  353396 type.go:168] "Request Body" body=""
	I1213 10:32:51.414190  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:51.414454  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:51.914146  353396 type.go:168] "Request Body" body=""
	I1213 10:32:51.914227  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:51.914572  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:52.414290  353396 type.go:168] "Request Body" body=""
	I1213 10:32:52.414382  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:52.414734  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:52.914517  353396 type.go:168] "Request Body" body=""
	I1213 10:32:52.914591  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:52.914875  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:52.914926  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:53.414207  353396 type.go:168] "Request Body" body=""
	I1213 10:32:53.414292  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:53.414618  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:53.914425  353396 type.go:168] "Request Body" body=""
	I1213 10:32:53.914515  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:53.914900  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:54.414164  353396 type.go:168] "Request Body" body=""
	I1213 10:32:54.414246  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:54.414585  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:54.915092  353396 type.go:168] "Request Body" body=""
	I1213 10:32:54.915167  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:54.915487  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:54.915545  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:55.414202  353396 type.go:168] "Request Body" body=""
	I1213 10:32:55.414280  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:55.414623  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:55.914337  353396 type.go:168] "Request Body" body=""
	I1213 10:32:55.914403  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:55.914665  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:56.415120  353396 type.go:168] "Request Body" body=""
	I1213 10:32:56.415206  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:56.415536  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:56.914238  353396 type.go:168] "Request Body" body=""
	I1213 10:32:56.914316  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:56.914647  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:57.414233  353396 type.go:168] "Request Body" body=""
	I1213 10:32:57.414314  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:57.414566  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:57.414610  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:57.914675  353396 type.go:168] "Request Body" body=""
	I1213 10:32:57.914760  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:57.915078  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:58.414843  353396 type.go:168] "Request Body" body=""
	I1213 10:32:58.414921  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:58.415260  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:58.914928  353396 type.go:168] "Request Body" body=""
	I1213 10:32:58.914994  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:58.915260  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:59.414997  353396 type.go:168] "Request Body" body=""
	I1213 10:32:59.415070  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:59.415409  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:59.415463  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:59.915087  353396 type.go:168] "Request Body" body=""
	I1213 10:32:59.915169  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:59.915509  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:00.414239  353396 type.go:168] "Request Body" body=""
	I1213 10:33:00.414313  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:00.414605  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:00.914240  353396 type.go:168] "Request Body" body=""
	I1213 10:33:00.914313  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:00.914656  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:01.414407  353396 type.go:168] "Request Body" body=""
	I1213 10:33:01.414488  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:01.414812  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:01.914174  353396 type.go:168] "Request Body" body=""
	I1213 10:33:01.914242  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:01.914578  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:01.914642  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical poll cycle then repeats every ~500 ms from 10:33:02.414 through 10:34:03.414: each GET to https://192.168.49.2:8441/api/v1/nodes/functional-652709 logs an empty response (status="" headers="" milliseconds=0), and node_ready.go:55 emits the same "connection refused" warning roughly every 2-2.5 s, the last at W1213 10:34:03.414981 ...]
	I1213 10:34:03.914802  353396 type.go:168] "Request Body" body=""
	I1213 10:34:03.914886  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:03.915200  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:04.415061  353396 type.go:168] "Request Body" body=""
	I1213 10:34:04.415173  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:04.415604  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:04.915045  353396 type.go:168] "Request Body" body=""
	I1213 10:34:04.915117  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:04.915454  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:05.414181  353396 type.go:168] "Request Body" body=""
	I1213 10:34:05.414260  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:05.414598  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:05.914312  353396 type.go:168] "Request Body" body=""
	I1213 10:34:05.914397  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:05.914761  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:05.914818  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:06.414172  353396 type.go:168] "Request Body" body=""
	I1213 10:34:06.414246  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:06.414578  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:06.914215  353396 type.go:168] "Request Body" body=""
	I1213 10:34:06.914294  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:06.914638  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:07.414373  353396 type.go:168] "Request Body" body=""
	I1213 10:34:07.414449  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:07.414801  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:07.914926  353396 type.go:168] "Request Body" body=""
	I1213 10:34:07.914993  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:07.915307  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:07.915360  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:08.415127  353396 type.go:168] "Request Body" body=""
	I1213 10:34:08.415205  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:08.415596  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:08.914374  353396 type.go:168] "Request Body" body=""
	I1213 10:34:08.914456  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:08.914801  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:09.414148  353396 type.go:168] "Request Body" body=""
	I1213 10:34:09.414219  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:09.414479  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:09.914227  353396 type.go:168] "Request Body" body=""
	I1213 10:34:09.914306  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:09.914661  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:10.414240  353396 type.go:168] "Request Body" body=""
	I1213 10:34:10.414319  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:10.414680  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:10.414778  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:10.918812  353396 type.go:168] "Request Body" body=""
	I1213 10:34:10.918890  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:10.919160  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:11.415030  353396 type.go:168] "Request Body" body=""
	I1213 10:34:11.415107  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:11.415436  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:11.914150  353396 type.go:168] "Request Body" body=""
	I1213 10:34:11.914232  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:11.914571  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:12.415071  353396 type.go:168] "Request Body" body=""
	I1213 10:34:12.415146  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:12.415421  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:12.415479  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:12.914213  353396 type.go:168] "Request Body" body=""
	I1213 10:34:12.914288  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:12.914622  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:13.414338  353396 type.go:168] "Request Body" body=""
	I1213 10:34:13.414421  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:13.414784  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:13.914194  353396 type.go:168] "Request Body" body=""
	I1213 10:34:13.914270  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:13.914538  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:14.414217  353396 type.go:168] "Request Body" body=""
	I1213 10:34:14.414294  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:14.414624  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:14.914210  353396 type.go:168] "Request Body" body=""
	I1213 10:34:14.914290  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:14.914590  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:14.914639  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:15.414121  353396 type.go:168] "Request Body" body=""
	I1213 10:34:15.414260  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:15.414569  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:15.914203  353396 type.go:168] "Request Body" body=""
	I1213 10:34:15.914284  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:15.914613  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:16.414225  353396 type.go:168] "Request Body" body=""
	I1213 10:34:16.414308  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:16.414648  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:16.914359  353396 type.go:168] "Request Body" body=""
	I1213 10:34:16.914447  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:16.914753  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:16.914798  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:17.414239  353396 type.go:168] "Request Body" body=""
	I1213 10:34:17.414312  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:17.414646  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:17.914569  353396 type.go:168] "Request Body" body=""
	I1213 10:34:17.914646  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:17.914997  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:18.414794  353396 type.go:168] "Request Body" body=""
	I1213 10:34:18.414864  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:18.415130  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:18.914878  353396 type.go:168] "Request Body" body=""
	I1213 10:34:18.914956  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:18.915256  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:18.915309  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:19.415048  353396 type.go:168] "Request Body" body=""
	I1213 10:34:19.415124  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:19.415473  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:19.914155  353396 type.go:168] "Request Body" body=""
	I1213 10:34:19.914239  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:19.914557  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:20.414216  353396 type.go:168] "Request Body" body=""
	I1213 10:34:20.414293  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:20.414595  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:20.914298  353396 type.go:168] "Request Body" body=""
	I1213 10:34:20.914378  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:20.914742  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:21.414175  353396 type.go:168] "Request Body" body=""
	I1213 10:34:21.414247  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:21.414574  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:21.414628  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:21.914278  353396 type.go:168] "Request Body" body=""
	I1213 10:34:21.914361  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:21.914745  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:22.414284  353396 type.go:168] "Request Body" body=""
	I1213 10:34:22.414361  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:22.414747  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:22.914549  353396 type.go:168] "Request Body" body=""
	I1213 10:34:22.914626  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:22.914988  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:23.414779  353396 type.go:168] "Request Body" body=""
	I1213 10:34:23.414855  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:23.415214  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:23.415277  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:23.915088  353396 type.go:168] "Request Body" body=""
	I1213 10:34:23.915170  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:23.915507  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:24.414168  353396 type.go:168] "Request Body" body=""
	I1213 10:34:24.414241  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:24.414497  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:24.914176  353396 type.go:168] "Request Body" body=""
	I1213 10:34:24.914250  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:24.914580  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:25.414317  353396 type.go:168] "Request Body" body=""
	I1213 10:34:25.414397  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:25.414758  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:25.914443  353396 type.go:168] "Request Body" body=""
	I1213 10:34:25.914516  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:25.914878  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:25.914936  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:26.414193  353396 type.go:168] "Request Body" body=""
	I1213 10:34:26.414269  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:26.414575  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:26.914218  353396 type.go:168] "Request Body" body=""
	I1213 10:34:26.914293  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:26.914611  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:27.414157  353396 type.go:168] "Request Body" body=""
	I1213 10:34:27.414224  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:27.414475  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:27.914651  353396 type.go:168] "Request Body" body=""
	I1213 10:34:27.914747  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:27.915082  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:27.915143  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:28.414747  353396 type.go:168] "Request Body" body=""
	I1213 10:34:28.414831  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:28.415166  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:28.914918  353396 type.go:168] "Request Body" body=""
	I1213 10:34:28.914994  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:28.915317  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:29.415099  353396 type.go:168] "Request Body" body=""
	I1213 10:34:29.415182  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:29.415527  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:29.914143  353396 type.go:168] "Request Body" body=""
	I1213 10:34:29.914235  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:29.914632  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:30.414347  353396 type.go:168] "Request Body" body=""
	I1213 10:34:30.414415  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:30.414708  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:30.414755  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:30.914237  353396 type.go:168] "Request Body" body=""
	I1213 10:34:30.914320  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:30.914657  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:31.414414  353396 type.go:168] "Request Body" body=""
	I1213 10:34:31.414503  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:31.414889  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:31.914157  353396 type.go:168] "Request Body" body=""
	I1213 10:34:31.914230  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:31.914496  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:32.414209  353396 type.go:168] "Request Body" body=""
	I1213 10:34:32.414292  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:32.414648  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:32.914128  353396 type.go:168] "Request Body" body=""
	I1213 10:34:32.914211  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:32.914560  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:32.914616  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:33.414256  353396 type.go:168] "Request Body" body=""
	I1213 10:34:33.414326  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:33.414617  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:33.914297  353396 type.go:168] "Request Body" body=""
	I1213 10:34:33.914377  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:33.914762  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:34.414238  353396 type.go:168] "Request Body" body=""
	I1213 10:34:34.414315  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:34.414643  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:34.914151  353396 type.go:168] "Request Body" body=""
	I1213 10:34:34.914224  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:34.914486  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:35.414223  353396 type.go:168] "Request Body" body=""
	I1213 10:34:35.414304  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:35.414642  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:35.414735  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:35.914235  353396 type.go:168] "Request Body" body=""
	I1213 10:34:35.914320  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:35.914658  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:36.414261  353396 type.go:168] "Request Body" body=""
	I1213 10:34:36.414332  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:36.414605  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:36.914211  353396 type.go:168] "Request Body" body=""
	I1213 10:34:36.914285  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:36.914640  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:37.414211  353396 type.go:168] "Request Body" body=""
	I1213 10:34:37.414289  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:37.414584  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:37.914675  353396 type.go:168] "Request Body" body=""
	I1213 10:34:37.914757  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:37.915023  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:37.915064  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:38.414903  353396 type.go:168] "Request Body" body=""
	I1213 10:34:38.414986  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:38.415396  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:38.914137  353396 type.go:168] "Request Body" body=""
	I1213 10:34:38.914223  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:38.914580  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:39.414172  353396 type.go:168] "Request Body" body=""
	I1213 10:34:39.414253  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:39.414582  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:39.914286  353396 type.go:168] "Request Body" body=""
	I1213 10:34:39.914363  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:39.914715  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:40.414232  353396 type.go:168] "Request Body" body=""
	I1213 10:34:40.414314  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:40.414677  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:40.414753  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:40.914094  353396 type.go:168] "Request Body" body=""
	I1213 10:34:40.914175  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:40.914491  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:41.414243  353396 type.go:168] "Request Body" body=""
	I1213 10:34:41.414321  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:41.414666  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:41.914412  353396 type.go:168] "Request Body" body=""
	I1213 10:34:41.914495  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:41.914870  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:42.414297  353396 type.go:168] "Request Body" body=""
	I1213 10:34:42.414371  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:42.414633  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:42.914585  353396 type.go:168] "Request Body" body=""
	I1213 10:34:42.914668  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:42.915024  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:42.915079  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:43.414607  353396 type.go:168] "Request Body" body=""
	I1213 10:34:43.414702  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:43.415071  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:43.914792  353396 type.go:168] "Request Body" body=""
	I1213 10:34:43.914869  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:43.915208  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:44.415017  353396 type.go:168] "Request Body" body=""
	I1213 10:34:44.415093  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:44.415470  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:44.915253  353396 type.go:168] "Request Body" body=""
	I1213 10:34:44.915329  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:44.915668  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:44.915722  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:45.414372  353396 type.go:168] "Request Body" body=""
	I1213 10:34:45.414449  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:45.414746  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:45.914236  353396 type.go:168] "Request Body" body=""
	I1213 10:34:45.914316  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:45.914655  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:46.414239  353396 type.go:168] "Request Body" body=""
	I1213 10:34:46.414322  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:46.414658  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:46.915158  353396 type.go:168] "Request Body" body=""
	I1213 10:34:46.915231  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:46.915495  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:47.414163  353396 type.go:168] "Request Body" body=""
	I1213 10:34:47.414242  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:47.414552  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:47.414603  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:47.914533  353396 type.go:168] "Request Body" body=""
	I1213 10:34:47.914615  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:47.914992  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:48.414726  353396 type.go:168] "Request Body" body=""
	I1213 10:34:48.414795  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:48.415059  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:48.914847  353396 type.go:168] "Request Body" body=""
	I1213 10:34:48.914935  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:48.915268  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:49.415068  353396 type.go:168] "Request Body" body=""
	I1213 10:34:49.415159  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:49.415526  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:49.415582  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:49.914165  353396 type.go:168] "Request Body" body=""
	I1213 10:34:49.914239  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:49.914499  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:50.414183  353396 type.go:168] "Request Body" body=""
	I1213 10:34:50.414258  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:50.414554  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:50.914233  353396 type.go:168] "Request Body" body=""
	I1213 10:34:50.914307  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:50.914623  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:51.414141  353396 type.go:168] "Request Body" body=""
	I1213 10:34:51.414231  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:51.414525  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:51.914233  353396 type.go:168] "Request Body" body=""
	I1213 10:34:51.914311  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:51.914675  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:51.914750  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:52.414266  353396 type.go:168] "Request Body" body=""
	I1213 10:34:52.414347  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:52.414711  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:52.914454  353396 type.go:168] "Request Body" body=""
	I1213 10:34:52.914525  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:52.914819  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:53.414527  353396 type.go:168] "Request Body" body=""
	I1213 10:34:53.414603  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:53.414939  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:53.914755  353396 type.go:168] "Request Body" body=""
	I1213 10:34:53.914832  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:53.915171  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:53.915227  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:34:56.414726  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:34:58.415298  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:00.914708  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:02.915031  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:05.414721  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:07.415553  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:09.415626  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:11.915629  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:14.415644  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:16.914810  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:18.915589  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:21.414775  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:23.415331  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:25.915467  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:28.415239  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:30.914682  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:32.914844  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:35.414752  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:37.414887  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:39.914716  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:41.914886  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:43.915335  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:46.414761  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:48.415416  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:50.914705  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:52.914936  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:53.414626  353396 type.go:168] "Request Body" body=""
	I1213 10:35:53.414743  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:53.415155  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:53.914985  353396 type.go:168] "Request Body" body=""
	I1213 10:35:53.915060  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:53.915423  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:54.414132  353396 type.go:168] "Request Body" body=""
	I1213 10:35:54.414212  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:54.414538  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:54.914221  353396 type.go:168] "Request Body" body=""
	I1213 10:35:54.914300  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:54.914639  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:55.414361  353396 type.go:168] "Request Body" body=""
	I1213 10:35:55.414442  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:55.414760  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:55.414814  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:55.914153  353396 type.go:168] "Request Body" body=""
	I1213 10:35:55.914231  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:55.914493  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:56.414257  353396 type.go:168] "Request Body" body=""
	I1213 10:35:56.414339  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:56.414657  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:56.914216  353396 type.go:168] "Request Body" body=""
	I1213 10:35:56.914293  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:56.914667  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:57.414176  353396 type.go:168] "Request Body" body=""
	I1213 10:35:57.414254  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:57.414584  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:57.914966  353396 type.go:168] "Request Body" body=""
	I1213 10:35:57.915050  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:57.915391  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:57.915453  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:58.414132  353396 type.go:168] "Request Body" body=""
	I1213 10:35:58.414215  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:58.414528  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:58.914158  353396 type.go:168] "Request Body" body=""
	I1213 10:35:58.914236  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:58.914510  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:59.414124  353396 type.go:168] "Request Body" body=""
	I1213 10:35:59.414208  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:59.414536  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:59.914263  353396 type.go:168] "Request Body" body=""
	I1213 10:35:59.914349  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:59.914758  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:00.421144  353396 type.go:168] "Request Body" body=""
	I1213 10:36:00.421250  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:00.421612  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:00.421665  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:00.914230  353396 type.go:168] "Request Body" body=""
	I1213 10:36:00.914305  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:00.914644  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:01.414215  353396 type.go:168] "Request Body" body=""
	I1213 10:36:01.414292  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:01.414622  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:01.914179  353396 type.go:168] "Request Body" body=""
	I1213 10:36:01.914256  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:01.914522  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:02.414207  353396 type.go:168] "Request Body" body=""
	I1213 10:36:02.414283  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:02.414571  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:02.914503  353396 type.go:168] "Request Body" body=""
	I1213 10:36:02.914581  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:02.914941  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:02.915005  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:03.414758  353396 type.go:168] "Request Body" body=""
	I1213 10:36:03.414829  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:03.415178  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:03.914982  353396 type.go:168] "Request Body" body=""
	I1213 10:36:03.915057  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:03.915402  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:04.415064  353396 type.go:168] "Request Body" body=""
	I1213 10:36:04.415144  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:04.415523  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:04.914219  353396 type.go:168] "Request Body" body=""
	I1213 10:36:04.914298  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:04.914617  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:05.414231  353396 type.go:168] "Request Body" body=""
	I1213 10:36:05.414310  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:05.414671  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:05.414749  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:05.914422  353396 type.go:168] "Request Body" body=""
	I1213 10:36:05.914498  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:05.914864  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:06.414177  353396 type.go:168] "Request Body" body=""
	I1213 10:36:06.414262  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:06.414578  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:06.914278  353396 type.go:168] "Request Body" body=""
	I1213 10:36:06.914363  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:06.914742  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:07.414300  353396 type.go:168] "Request Body" body=""
	I1213 10:36:07.414382  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:07.414720  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:07.414787  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:07.914791  353396 type.go:168] "Request Body" body=""
	I1213 10:36:07.914860  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:07.915123  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:08.414897  353396 type.go:168] "Request Body" body=""
	I1213 10:36:08.414981  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:08.415336  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:08.915032  353396 type.go:168] "Request Body" body=""
	I1213 10:36:08.915117  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:08.915466  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:09.414191  353396 type.go:168] "Request Body" body=""
	I1213 10:36:09.414260  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:09.414540  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:09.914271  353396 type.go:168] "Request Body" body=""
	I1213 10:36:09.914352  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:09.914675  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:09.914752  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:10.414138  353396 type.go:168] "Request Body" body=""
	I1213 10:36:10.414216  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:10.414557  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:10.914195  353396 type.go:168] "Request Body" body=""
	I1213 10:36:10.914266  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:10.914534  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:11.414263  353396 type.go:168] "Request Body" body=""
	I1213 10:36:11.414339  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:11.414753  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:11.914459  353396 type.go:168] "Request Body" body=""
	I1213 10:36:11.914533  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:11.914890  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:11.914948  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:12.414139  353396 type.go:168] "Request Body" body=""
	I1213 10:36:12.414211  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:12.414474  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:12.914342  353396 type.go:168] "Request Body" body=""
	I1213 10:36:12.914427  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:12.914750  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:13.414215  353396 type.go:168] "Request Body" body=""
	I1213 10:36:13.414295  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:13.414650  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:13.914372  353396 type.go:168] "Request Body" body=""
	I1213 10:36:13.914451  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:13.914752  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:14.414251  353396 type.go:168] "Request Body" body=""
	I1213 10:36:14.414328  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:14.414656  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:14.414721  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:14.914256  353396 type.go:168] "Request Body" body=""
	I1213 10:36:14.914328  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:14.914611  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:15.415149  353396 type.go:168] "Request Body" body=""
	I1213 10:36:15.415221  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:15.415540  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:15.914232  353396 type.go:168] "Request Body" body=""
	I1213 10:36:15.914308  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:15.914678  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:16.414245  353396 type.go:168] "Request Body" body=""
	I1213 10:36:16.414325  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:16.414657  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:16.914285  353396 type.go:168] "Request Body" body=""
	I1213 10:36:16.914367  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:16.914649  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:16.914725  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:17.414244  353396 type.go:168] "Request Body" body=""
	I1213 10:36:17.414333  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:17.414644  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:17.914739  353396 type.go:168] "Request Body" body=""
	I1213 10:36:17.914821  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:17.915139  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:18.414875  353396 type.go:168] "Request Body" body=""
	I1213 10:36:18.414955  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:18.415226  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:18.915006  353396 type.go:168] "Request Body" body=""
	I1213 10:36:18.915082  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:18.915415  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:18.915472  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:19.415096  353396 type.go:168] "Request Body" body=""
	I1213 10:36:19.415183  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:19.415488  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:19.914201  353396 type.go:168] "Request Body" body=""
	I1213 10:36:19.914273  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:19.914619  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:20.414338  353396 type.go:168] "Request Body" body=""
	I1213 10:36:20.414409  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:20.414746  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:20.914260  353396 type.go:168] "Request Body" body=""
	I1213 10:36:20.914335  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:20.914704  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:21.414252  353396 type.go:168] "Request Body" body=""
	I1213 10:36:21.414338  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:21.414656  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:21.414724  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:21.914251  353396 type.go:168] "Request Body" body=""
	I1213 10:36:21.914328  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:21.914668  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:22.414268  353396 type.go:168] "Request Body" body=""
	I1213 10:36:22.414350  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:22.414680  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:22.914474  353396 type.go:168] "Request Body" body=""
	I1213 10:36:22.914553  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:22.914836  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:23.414235  353396 type.go:168] "Request Body" body=""
	I1213 10:36:23.414326  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:23.414670  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:23.414743  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:23.914266  353396 type.go:168] "Request Body" body=""
	I1213 10:36:23.914367  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:23.914763  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:24.414152  353396 type.go:168] "Request Body" body=""
	I1213 10:36:24.414223  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:24.414481  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:24.914197  353396 type.go:168] "Request Body" body=""
	I1213 10:36:24.914304  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:24.914663  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:25.414263  353396 type.go:168] "Request Body" body=""
	I1213 10:36:25.414339  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:25.414676  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:25.914948  353396 type.go:168] "Request Body" body=""
	I1213 10:36:25.915020  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:25.915277  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:25.915318  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:26.415116  353396 type.go:168] "Request Body" body=""
	I1213 10:36:26.415208  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:26.415550  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:26.914250  353396 type.go:168] "Request Body" body=""
	I1213 10:36:26.914329  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:26.914612  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:27.414291  353396 type.go:168] "Request Body" body=""
	I1213 10:36:27.414364  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:27.414625  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:27.914739  353396 type.go:168] "Request Body" body=""
	I1213 10:36:27.914816  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:27.915095  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:28.414897  353396 type.go:168] "Request Body" body=""
	I1213 10:36:28.414982  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:28.415303  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:28.415358  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:28.915084  353396 type.go:168] "Request Body" body=""
	I1213 10:36:28.915156  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:28.915451  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:29.414204  353396 type.go:168] "Request Body" body=""
	I1213 10:36:29.414283  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:29.414602  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:29.914216  353396 type.go:168] "Request Body" body=""
	I1213 10:36:29.914292  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:29.914661  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:30.414927  353396 type.go:168] "Request Body" body=""
	I1213 10:36:30.415000  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:30.415303  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:30.915117  353396 type.go:168] "Request Body" body=""
	I1213 10:36:30.915200  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:30.915511  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:30.915566  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:31.414255  353396 type.go:168] "Request Body" body=""
	I1213 10:36:31.414349  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:31.414739  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:31.914164  353396 type.go:168] "Request Body" body=""
	I1213 10:36:31.914237  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:31.914519  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:32.414229  353396 type.go:168] "Request Body" body=""
	I1213 10:36:32.414304  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:32.414647  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:32.914523  353396 type.go:168] "Request Body" body=""
	I1213 10:36:32.914604  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:32.914915  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:33.414159  353396 type.go:168] "Request Body" body=""
	I1213 10:36:33.414232  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:33.414567  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:33.414632  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:33.914300  353396 type.go:168] "Request Body" body=""
	I1213 10:36:33.914382  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:33.914670  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:34.414374  353396 type.go:168] "Request Body" body=""
	I1213 10:36:34.414451  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:34.414727  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:34.914184  353396 type.go:168] "Request Body" body=""
	I1213 10:36:34.914264  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:34.914587  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:35.414286  353396 type.go:168] "Request Body" body=""
	I1213 10:36:35.414359  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:35.414670  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:35.414741  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:35.914405  353396 type.go:168] "Request Body" body=""
	I1213 10:36:35.914489  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:35.914832  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:36.415085  353396 type.go:168] "Request Body" body=""
	I1213 10:36:36.415160  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:36.415449  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:36.914164  353396 type.go:168] "Request Body" body=""
	I1213 10:36:36.914244  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:36.914585  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:37.414308  353396 type.go:168] "Request Body" body=""
	I1213 10:36:37.414384  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:37.414780  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:37.414840  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:37.914758  353396 type.go:168] "Request Body" body=""
	I1213 10:36:37.914831  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:37.915157  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:38.414970  353396 type.go:168] "Request Body" body=""
	I1213 10:36:38.415052  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:38.415405  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:38.915122  353396 type.go:168] "Request Body" body=""
	I1213 10:36:38.915210  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:38.915558  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:39.414163  353396 type.go:168] "Request Body" body=""
	I1213 10:36:39.414237  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:39.414542  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:39.914247  353396 type.go:168] "Request Body" body=""
	I1213 10:36:39.914324  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:39.914669  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:39.914747  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:40.414415  353396 type.go:168] "Request Body" body=""
	I1213 10:36:40.414494  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:40.414850  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:40.915098  353396 type.go:168] "Request Body" body=""
	I1213 10:36:40.915172  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:40.915425  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:41.414124  353396 type.go:168] "Request Body" body=""
	I1213 10:36:41.414207  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:41.414558  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:41.914180  353396 type.go:168] "Request Body" body=""
	I1213 10:36:41.914266  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:41.914604  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:42.415138  353396 type.go:168] "Request Body" body=""
	I1213 10:36:42.415216  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:42.415488  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:42.415535  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:42.914549  353396 type.go:168] "Request Body" body=""
	I1213 10:36:42.914622  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:42.914929  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:43.414232  353396 type.go:168] "Request Body" body=""
	I1213 10:36:43.414317  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:43.414680  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:43.914384  353396 type.go:168] "Request Body" body=""
	I1213 10:36:43.914452  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:43.914730  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:44.414224  353396 type.go:168] "Request Body" body=""
	I1213 10:36:44.414302  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:44.414657  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:44.914395  353396 type.go:168] "Request Body" body=""
	I1213 10:36:44.914480  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:44.914836  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:44.914896  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:45.414191  353396 type.go:168] "Request Body" body=""
	I1213 10:36:45.414264  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:45.414567  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:45.914170  353396 type.go:168] "Request Body" body=""
	I1213 10:36:45.914244  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:45.914607  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:46.414263  353396 type.go:168] "Request Body" body=""
	I1213 10:36:46.414343  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:46.414668  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:46.914163  353396 type.go:168] "Request Body" body=""
	I1213 10:36:46.914242  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:46.914578  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:47.414274  353396 type.go:168] "Request Body" body=""
	I1213 10:36:47.414359  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:47.414709  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:47.414762  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:47.914884  353396 type.go:168] "Request Body" body=""
	I1213 10:36:47.914961  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:47.915333  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:48.415033  353396 type.go:168] "Request Body" body=""
	I1213 10:36:48.415102  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:48.415408  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:48.914142  353396 type.go:168] "Request Body" body=""
	I1213 10:36:48.914217  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:48.914551  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:49.414248  353396 type.go:168] "Request Body" body=""
	I1213 10:36:49.414332  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:49.414653  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:49.914171  353396 type.go:168] "Request Body" body=""
	I1213 10:36:49.914239  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:49.914490  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:49.914533  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:50.414250  353396 type.go:168] "Request Body" body=""
	I1213 10:36:50.414332  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:50.414655  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:50.914247  353396 type.go:168] "Request Body" body=""
	I1213 10:36:50.914325  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:50.914719  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:51.415136  353396 type.go:168] "Request Body" body=""
	I1213 10:36:51.415212  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:51.415495  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:51.914194  353396 type.go:168] "Request Body" body=""
	I1213 10:36:51.914271  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:51.914606  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:51.914663  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:52.414196  353396 type.go:168] "Request Body" body=""
	I1213 10:36:52.414278  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:52.414628  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:52.914521  353396 type.go:168] "Request Body" body=""
	I1213 10:36:52.914591  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:52.914917  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:53.414620  353396 type.go:168] "Request Body" body=""
	I1213 10:36:53.414716  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:53.415008  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:53.914831  353396 type.go:168] "Request Body" body=""
	I1213 10:36:53.914908  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:53.915259  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:53.915316  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:54.415073  353396 type.go:168] "Request Body" body=""
	I1213 10:36:54.415143  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:54.415457  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:54.914176  353396 type.go:168] "Request Body" body=""
	I1213 10:36:54.914260  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:54.914603  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:55.414307  353396 type.go:168] "Request Body" body=""
	I1213 10:36:55.414386  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:55.414744  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:55.914154  353396 type.go:168] "Request Body" body=""
	I1213 10:36:55.914224  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:55.914531  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:56.414237  353396 type.go:168] "Request Body" body=""
	I1213 10:36:56.414331  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:56.414644  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:56.414728  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:56.914208  353396 type.go:168] "Request Body" body=""
	I1213 10:36:56.914282  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:56.914593  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:57.414164  353396 type.go:168] "Request Body" body=""
	I1213 10:36:57.414233  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:57.414586  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:57.914740  353396 type.go:168] "Request Body" body=""
	I1213 10:36:57.914819  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:57.915172  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:58.414966  353396 type.go:168] "Request Body" body=""
	I1213 10:36:58.415044  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:58.415365  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:58.415427  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:58.914107  353396 type.go:168] "Request Body" body=""
	I1213 10:36:58.914182  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:58.914459  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:59.414161  353396 type.go:168] "Request Body" body=""
	I1213 10:36:59.414247  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:59.414593  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:59.914255  353396 type.go:168] "Request Body" body=""
	I1213 10:36:59.914339  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:59.914625  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:37:00.414213  353396 type.go:168] "Request Body" body=""
	I1213 10:37:00.414303  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:37:00.414598  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:37:00.914236  353396 type.go:168] "Request Body" body=""
	I1213 10:37:00.914308  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:37:00.914641  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:37:00.914708  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:37:01.414238  353396 type.go:168] "Request Body" body=""
	I1213 10:37:01.414328  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:37:01.414724  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:37:01.914174  353396 type.go:168] "Request Body" body=""
	I1213 10:37:01.914261  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:37:01.914555  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:37:02.414290  353396 type.go:168] "Request Body" body=""
	I1213 10:37:02.414375  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:37:02.414752  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:37:02.914840  353396 type.go:168] "Request Body" body=""
	I1213 10:37:02.914918  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:37:02.915213  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:37:02.915263  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:37:03.415012  353396 type.go:168] "Request Body" body=""
	I1213 10:37:03.415090  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:37:03.415417  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:37:03.914198  353396 type.go:168] "Request Body" body=""
	I1213 10:37:03.914276  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:37:03.914604  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:37:04.414267  353396 type.go:168] "Request Body" body=""
	I1213 10:37:04.414350  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:37:04.414724  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:37:04.914155  353396 type.go:168] "Request Body" body=""
	I1213 10:37:04.914256  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:37:04.914593  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:37:05.414221  353396 type.go:168] "Request Body" body=""
	I1213 10:37:05.414327  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:37:05.414680  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:37:05.414769  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:37:05.914428  353396 type.go:168] "Request Body" body=""
	I1213 10:37:05.914509  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:37:05.914816  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:37:06.414144  353396 type.go:168] "Request Body" body=""
	I1213 10:37:06.414222  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:37:06.414490  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:37:06.914508  353396 type.go:168] "Request Body" body=""
	I1213 10:37:06.914592  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:37:06.914976  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:37:07.414224  353396 type.go:168] "Request Body" body=""
	I1213 10:37:07.414306  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:37:07.414615  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:37:07.914710  353396 type.go:168] "Request Body" body=""
	I1213 10:37:07.914821  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:37:07.915135  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:37:07.915217  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:37:08.414751  353396 node_ready.go:38] duration metric: took 6m0.000751586s for node "functional-652709" to be "Ready" ...
	I1213 10:37:08.417881  353396 out.go:203] 
	W1213 10:37:08.420786  353396 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 10:37:08.420808  353396 out.go:285] * 
	W1213 10:37:08.422957  353396 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:37:08.425703  353396 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-652709 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m6.312958666s for "functional-652709" cluster.
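The six minutes of "connection refused" retries above are the standard client-go readiness poll: GET the node object every 500ms, treat transport errors as retryable, and give up when the deadline expires. A minimal sketch of that pattern, assuming a configured client-go clientset (illustrative only, not minikube's actual node_ready.go; the 500ms interval and 6m deadline are read off the timestamps and the "wait 6m0s for node" message above):

package nodewait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the apiserver for the named node until its
// Ready condition is True, treating transport errors (such as the
// "connection refused" seen above while the apiserver is down) as
// retryable rather than fatal.
func waitNodeReady(cs kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(context.Background(),
		500*time.Millisecond, // retry interval seen in the timestamps above
		6*time.Minute,        // the "wait 6m0s for node" deadline
		true,                 // poll immediately before the first sleep
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// Log and keep polling, mirroring the "(will retry)" warnings.
				fmt.Printf("error getting node %q (will retry): %v\n", name, err)
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}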
I1213 10:37:08.934511  308915 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-652709
helpers_test.go:244: (dbg) docker inspect functional-652709:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f",
	        "Created": "2025-12-13T10:22:44.366993781Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 347931,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:22:44.437030763Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/hostname",
	        "HostsPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/hosts",
	        "LogPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f-json.log",
	        "Name": "/functional-652709",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-652709:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-652709",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f",
	                "LowerDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095-init/diff:/var/lib/docker/overlay2/5d0ca86320f2c06765995d644a73af30cd6ddb36f5c1a2a6b1ebb24af53cdabd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/merged",
	                "UpperDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/diff",
	                "WorkDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-652709",
	                "Source": "/var/lib/docker/volumes/functional-652709/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-652709",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-652709",
	                "name.minikube.sigs.k8s.io": "functional-652709",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "52e527b5bd789a02eb7efb651200033ed4929e5fc7545e9df042d3f777cc9782",
	            "SandboxKey": "/var/run/docker/netns/52e527b5bd78",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-652709": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:23:08:9e:cb:13",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "344f2b940117dadb28d1ef1328f911c0446307288fdfafebfe59f38e473f79cb",
	                    "EndpointID": "8954f96e5987202be5715e7023384fe862744778b2520bccba28c57814f0980f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-652709",
	                        "0f6101071ca2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
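In the inspect output, every container port, including the 8441 apiserver port the poll above was dialing, is published only on 127.0.0.1 with an ephemeral host port (8441/tcp -> 127.0.0.1:33128). A small sketch of recovering such a mapping programmatically, reusing the same inspect template minikube itself runs for 22/tcp later in this log (hostPortFor is a hypothetical helper that shells out to the docker CLI):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPortFor returns the 127.0.0.1 host port Docker bound to the
// given container port, via the same Go template seen in the
// cli_runner lines below.
func hostPortFor(container, port string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPortFor("functional-652709", "8441/tcp")
	if err != nil {
		panic(err)
	}
	fmt.Println(port) // "33128" per the inspect output above
}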
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-652709 -n functional-652709
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-652709 -n functional-652709: exit status 2 (384.142379ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-652709 logs -n 25: (1.044927529s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop           │ -p addons-672850                                                                                                                                        │ addons-672850     │ jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:18 UTC │
	│ addons         │ enable dashboard -p addons-672850                                                                                                                       │ addons-672850     │ jenkins │ v1.37.0 │ 13 Dec 25 10:18 UTC │ 13 Dec 25 10:18 UTC │
	│ addons         │ disable dashboard -p addons-672850                                                                                                                      │ addons-672850     │ jenkins │ v1.37.0 │ 13 Dec 25 10:18 UTC │ 13 Dec 25 10:18 UTC │
	│ addons         │ disable gvisor -p addons-672850                                                                                                                         │ addons-672850     │ jenkins │ v1.37.0 │ 13 Dec 25 10:18 UTC │ 13 Dec 25 10:18 UTC │
	│ delete         │ -p addons-672850                                                                                                                                        │ addons-672850     │ jenkins │ v1.37.0 │ 13 Dec 25 10:18 UTC │ 13 Dec 25 10:18 UTC │
	│ start          │ -p dockerenv-403574 --driver=docker  --container-runtime=containerd                                                                                     │ dockerenv-403574  │ jenkins │ v1.37.0 │ 13 Dec 25 10:18 UTC │ 13 Dec 25 10:18 UTC │
	│ docker-env     │ --ssh-host --ssh-add -p dockerenv-403574                                                                                                                │ dockerenv-403574  │ jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
	│ delete         │ -p dockerenv-403574                                                                                                                                     │ dockerenv-403574  │ jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
	│ start          │ -p nospam-462625 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-462625 --driver=docker  --container-runtime=containerd                           │ nospam-462625     │ jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
	│ start          │ nospam-462625 --log_dir /tmp/nospam-462625 start --dry-run                                                                                              │ nospam-462625     │ jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │                     │
	│ start          │ nospam-462625 --log_dir /tmp/nospam-462625 start --dry-run                                                                                              │ nospam-462625     │ jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │                     │
	│ start          │ nospam-462625 --log_dir /tmp/nospam-462625 start --dry-run                                                                                              │ nospam-462625     │ jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │                     │
	│ pause          │ nospam-462625 --log_dir /tmp/nospam-462625 pause                                                                                                        │ nospam-462625     │ jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
	│ pause          │ nospam-462625 --log_dir /tmp/nospam-462625 pause                                                                                                        │ nospam-462625     │ jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
	│ update-context │ functional-319494 update-context --alsologtostderr -v=2                                                                                                 │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ image          │ functional-319494 image ls --format short --alsologtostderr                                                                                             │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ image          │ functional-319494 image ls --format yaml --alsologtostderr                                                                                              │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ ssh            │ functional-319494 ssh pgrep buildkitd                                                                                                                   │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │                     │
	│ image          │ functional-319494 image ls --format json --alsologtostderr                                                                                              │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ image          │ functional-319494 image build -t localhost/my-image:functional-319494 testdata/build --alsologtostderr                                                  │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ image          │ functional-319494 image ls --format table --alsologtostderr                                                                                             │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ image          │ functional-319494 image ls                                                                                                                              │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ delete         │ -p functional-319494                                                                                                                                    │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ start          │ -p functional-652709 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │                     │
	│ start          │ -p functional-652709 --alsologtostderr -v=8                                                                                                             │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:31 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:31:02
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:31:02.672113  353396 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:31:02.672249  353396 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:31:02.672258  353396 out.go:374] Setting ErrFile to fd 2...
	I1213 10:31:02.672263  353396 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:31:02.672511  353396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 10:31:02.672909  353396 out.go:368] Setting JSON to false
	I1213 10:31:02.673776  353396 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":11616,"bootTime":1765610247,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 10:31:02.673896  353396 start.go:143] virtualization:  
	I1213 10:31:02.677410  353396 out.go:179] * [functional-652709] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:31:02.681384  353396 notify.go:221] Checking for updates...
	I1213 10:31:02.681459  353396 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 10:31:02.684444  353396 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:31:02.687336  353396 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:31:02.690317  353396 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 10:31:02.693212  353396 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:31:02.696019  353396 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:31:02.699466  353396 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:31:02.699577  353396 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:31:02.725188  353396 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:31:02.725318  353396 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:31:02.796082  353396 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:31:02.785556605 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:31:02.796187  353396 docker.go:319] overlay module found
	I1213 10:31:02.799378  353396 out.go:179] * Using the docker driver based on existing profile
	I1213 10:31:02.802341  353396 start.go:309] selected driver: docker
	I1213 10:31:02.802370  353396 start.go:927] validating driver "docker" against &{Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:31:02.802524  353396 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:31:02.802652  353396 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:31:02.859333  353396 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:31:02.849982894 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:31:02.859762  353396 cni.go:84] Creating CNI manager for ""
	I1213 10:31:02.859824  353396 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:31:02.859884  353396 start.go:353] cluster config:
	{Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
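The blob above is minikube's cluster config struct printed verbatim with Go's %+v verb. For readability, a trimmed, hypothetical rendering of the handful of fields this failure hinges on (the real config type has many more fields; the names below mirror the dump, the type itself is not minikube's):

package config

import "time"

// ClusterConfig is a hypothetical subset of the struct dumped above,
// kept only to the fields relevant to this run.
type ClusterConfig struct {
	Name              string        // functional-652709
	Memory            int           // 4096 (MiB, from --memory=4096)
	CPUs              int           // 2
	APIServerPort     int           // 8441 (from --apiserver-port=8441)
	KubernetesVersion string        // v1.35.0-beta.0
	ContainerRuntime  string        // containerd
	StartHostTimeout  time.Duration // 6m0s, the deadline the readiness wait exceeded
}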
	I1213 10:31:02.863117  353396 out.go:179] * Starting "functional-652709" primary control-plane node in "functional-652709" cluster
	I1213 10:31:02.865981  353396 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 10:31:02.868957  353396 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:31:02.871941  353396 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 10:31:02.871997  353396 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 10:31:02.872008  353396 cache.go:65] Caching tarball of preloaded images
	I1213 10:31:02.872055  353396 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:31:02.872104  353396 preload.go:238] Found /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 10:31:02.872129  353396 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 10:31:02.872236  353396 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/config.json ...
	I1213 10:31:02.890218  353396 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:31:02.890243  353396 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:31:02.890259  353396 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:31:02.890291  353396 start.go:360] acquireMachinesLock for functional-652709: {Name:mk6e8c40fbbb5af0bb2468340fd710875030300d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:31:02.890351  353396 start.go:364] duration metric: took 34.691µs to acquireMachinesLock for "functional-652709"
	I1213 10:31:02.890374  353396 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:31:02.890380  353396 fix.go:54] fixHost starting: 
	I1213 10:31:02.890658  353396 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
	I1213 10:31:02.911217  353396 fix.go:112] recreateIfNeeded on functional-652709: state=Running err=<nil>
	W1213 10:31:02.911248  353396 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 10:31:02.914505  353396 out.go:252] * Updating the running docker "functional-652709" container ...
	I1213 10:31:02.914550  353396 machine.go:94] provisionDockerMachine start ...
	I1213 10:31:02.914653  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:02.937238  353396 main.go:143] libmachine: Using SSH client type: native
	I1213 10:31:02.937582  353396 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1213 10:31:02.937592  353396 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:31:03.091334  353396 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-652709
	
	I1213 10:31:03.091359  353396 ubuntu.go:182] provisioning hostname "functional-652709"
	I1213 10:31:03.091424  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:03.110422  353396 main.go:143] libmachine: Using SSH client type: native
	I1213 10:31:03.110837  353396 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1213 10:31:03.110855  353396 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-652709 && echo "functional-652709" | sudo tee /etc/hostname
	I1213 10:31:03.277113  353396 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-652709
	
	I1213 10:31:03.277196  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:03.294664  353396 main.go:143] libmachine: Using SSH client type: native
	I1213 10:31:03.295057  353396 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1213 10:31:03.295079  353396 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-652709' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-652709/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-652709' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:31:03.447182  353396 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:31:03.447207  353396 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-307042/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-307042/.minikube}
	I1213 10:31:03.447240  353396 ubuntu.go:190] setting up certificates
	I1213 10:31:03.447256  353396 provision.go:84] configureAuth start
	I1213 10:31:03.447330  353396 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-652709
	I1213 10:31:03.465044  353396 provision.go:143] copyHostCerts
	I1213 10:31:03.465100  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem
	I1213 10:31:03.465141  353396 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem, removing ...
	I1213 10:31:03.465148  353396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem
	I1213 10:31:03.465220  353396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem (1082 bytes)
	I1213 10:31:03.465329  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem
	I1213 10:31:03.465349  353396 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem, removing ...
	I1213 10:31:03.465353  353396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem
	I1213 10:31:03.465383  353396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem (1123 bytes)
	I1213 10:31:03.465436  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem
	I1213 10:31:03.465453  353396 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem, removing ...
	I1213 10:31:03.465457  353396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem
	I1213 10:31:03.465486  353396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem (1675 bytes)
	I1213 10:31:03.465541  353396 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem org=jenkins.functional-652709 san=[127.0.0.1 192.168.49.2 functional-652709 localhost minikube]
	I1213 10:31:03.927648  353396 provision.go:177] copyRemoteCerts
	I1213 10:31:03.927724  353396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:31:03.927763  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:03.947692  353396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:31:04.064623  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 10:31:04.064688  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:31:04.082355  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 10:31:04.082418  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:31:04.100866  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 10:31:04.100930  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:31:04.121259  353396 provision.go:87] duration metric: took 673.978127ms to configureAuth
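The server certificate generated in the provision step above embeds the SANs [127.0.0.1 192.168.49.2 functional-652709 localhost minikube]. A minimal spot-check of the written file on the host, assuming OpenSSL 1.1.1+ for the -ext flag (path taken from the auth options dump above):

  openssl x509 -noout -subject -ext subjectAltName \
    -in /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem
  # subjectAltName should list the five SANs shown in the provision log line above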
	I1213 10:31:04.121312  353396 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:31:04.121495  353396 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:31:04.121509  353396 machine.go:97] duration metric: took 1.206951102s to provisionDockerMachine
	I1213 10:31:04.121518  353396 start.go:293] postStartSetup for "functional-652709" (driver="docker")
	I1213 10:31:04.121529  353396 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:31:04.121586  353396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:31:04.121633  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:04.139400  353396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:31:04.246752  353396 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:31:04.250273  353396 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1213 10:31:04.250297  353396 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1213 10:31:04.250302  353396 command_runner.go:130] > VERSION_ID="12"
	I1213 10:31:04.250307  353396 command_runner.go:130] > VERSION="12 (bookworm)"
	I1213 10:31:04.250312  353396 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1213 10:31:04.250316  353396 command_runner.go:130] > ID=debian
	I1213 10:31:04.250320  353396 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1213 10:31:04.250325  353396 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1213 10:31:04.250331  353396 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1213 10:31:04.250368  353396 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:31:04.250390  353396 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:31:04.250401  353396 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/addons for local assets ...
	I1213 10:31:04.250463  353396 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/files for local assets ...
	I1213 10:31:04.250545  353396 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> 3089152.pem in /etc/ssl/certs
	I1213 10:31:04.250556  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> /etc/ssl/certs/3089152.pem
	I1213 10:31:04.250633  353396 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/test/nested/copy/308915/hosts -> hosts in /etc/test/nested/copy/308915
	I1213 10:31:04.250715  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/test/nested/copy/308915/hosts -> /etc/test/nested/copy/308915/hosts
	I1213 10:31:04.250766  353396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/308915
	I1213 10:31:04.258199  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 10:31:04.275892  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/test/nested/copy/308915/hosts --> /etc/test/nested/copy/308915/hosts (40 bytes)
	I1213 10:31:04.293256  353396 start.go:296] duration metric: took 171.721845ms for postStartSetup
	I1213 10:31:04.293373  353396 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:31:04.293418  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:04.310428  353396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:31:04.412061  353396 command_runner.go:130] > 11%
	I1213 10:31:04.412134  353396 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:31:04.417606  353396 command_runner.go:130] > 174G
	I1213 10:31:04.418241  353396 fix.go:56] duration metric: took 1.527856492s for fixHost
	I1213 10:31:04.418260  353396 start.go:83] releasing machines lock for "functional-652709", held for 1.527895524s
	I1213 10:31:04.418328  353396 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-652709
	I1213 10:31:04.443217  353396 ssh_runner.go:195] Run: cat /version.json
	I1213 10:31:04.443268  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:04.443564  353396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:31:04.443617  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:04.481371  353396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:31:04.481516  353396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:31:04.669844  353396 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1213 10:31:04.669910  353396 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1213 10:31:04.670045  353396 ssh_runner.go:195] Run: systemctl --version
	I1213 10:31:04.676239  353396 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1213 10:31:04.676276  353396 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1213 10:31:04.676350  353396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 10:31:04.680689  353396 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 10:31:04.680854  353396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:31:04.680918  353396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:31:04.688793  353396 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 10:31:04.688818  353396 start.go:496] detecting cgroup driver to use...
	I1213 10:31:04.688851  353396 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:31:04.688909  353396 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 10:31:04.704425  353396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 10:31:04.717662  353396 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:31:04.717728  353396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:31:04.733551  353396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:31:04.746955  353396 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:31:04.865557  353396 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:31:04.977869  353396 docker.go:234] disabling docker service ...
	I1213 10:31:04.977950  353396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:31:04.992461  353396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:31:05.013428  353396 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:31:05.135601  353396 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:31:05.282715  353396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:31:05.296047  353396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:31:05.308957  353396 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1213 10:31:05.310188  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 10:31:05.319385  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 10:31:05.328561  353396 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 10:31:05.328627  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 10:31:05.337573  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:31:05.346847  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 10:31:05.355976  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:31:05.364985  353396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:31:05.373424  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 10:31:05.382892  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 10:31:05.391826  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 10:31:05.401136  353396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:31:05.407987  353396 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 10:31:05.408928  353396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:31:05.416444  353396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:31:05.526748  353396 ssh_runner.go:195] Run: sudo systemctl restart containerd
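The sed edits above switch containerd to the cgroupfs driver (SystemdCgroup = false) and pin the pause sandbox image before this restart. A minimal way to confirm the applied settings from inside the node, using only tools already invoked in this log:

  sudo grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = false
  sudo crictl info | grep -i 'SystemdCgroup'                 # runtime view of the same setting (see the crictl info dump below)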
	I1213 10:31:05.655433  353396 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 10:31:05.655515  353396 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 10:31:05.659353  353396 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1213 10:31:05.659378  353396 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 10:31:05.659389  353396 command_runner.go:130] > Device: 0,72	Inode: 1622        Links: 1
	I1213 10:31:05.659396  353396 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 10:31:05.659402  353396 command_runner.go:130] > Access: 2025-12-13 10:31:05.610211940 +0000
	I1213 10:31:05.659407  353396 command_runner.go:130] > Modify: 2025-12-13 10:31:05.610211940 +0000
	I1213 10:31:05.659412  353396 command_runner.go:130] > Change: 2025-12-13 10:31:05.610211940 +0000
	I1213 10:31:05.659416  353396 command_runner.go:130] >  Birth: -
	I1213 10:31:05.660005  353396 start.go:564] Will wait 60s for crictl version
	I1213 10:31:05.660063  353396 ssh_runner.go:195] Run: which crictl
	I1213 10:31:05.663492  353396 command_runner.go:130] > /usr/local/bin/crictl
	I1213 10:31:05.663579  353396 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:31:05.685881  353396 command_runner.go:130] > Version:  0.1.0
	I1213 10:31:05.685946  353396 command_runner.go:130] > RuntimeName:  containerd
	I1213 10:31:05.686097  353396 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1213 10:31:05.686253  353396 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 10:31:05.688463  353396 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 10:31:05.688528  353396 ssh_runner.go:195] Run: containerd --version
	I1213 10:31:05.706883  353396 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1213 10:31:05.709639  353396 ssh_runner.go:195] Run: containerd --version
	I1213 10:31:05.727187  353396 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1213 10:31:05.735610  353396 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 10:31:05.738579  353396 cli_runner.go:164] Run: docker network inspect functional-652709 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
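The Go template in this docker network inspect call flattens the network's IPAM config into a single JSON object. The same fields can be pulled individually with a simpler template; a sketch using the same docker CLI template syntax (for the default kic network the subnet and gateway would correspond to the 192.168.49.1/192.168.49.2 addresses seen elsewhere in this log):

  docker network inspect functional-652709 \
    --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'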
	I1213 10:31:05.753316  353396 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 10:31:05.757039  353396 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1213 10:31:05.757213  353396 kubeadm.go:884] updating cluster {Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:31:05.757336  353396 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 10:31:05.757417  353396 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:31:05.778952  353396 command_runner.go:130] > {
	I1213 10:31:05.778976  353396 command_runner.go:130] >   "images":  [
	I1213 10:31:05.778980  353396 command_runner.go:130] >     {
	I1213 10:31:05.778990  353396 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 10:31:05.778995  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779001  353396 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 10:31:05.779005  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779009  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779018  353396 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1213 10:31:05.779024  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779028  353396 command_runner.go:130] >       "size":  "40636774",
	I1213 10:31:05.779032  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779041  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779045  353396 command_runner.go:130] >     },
	I1213 10:31:05.779053  353396 command_runner.go:130] >     {
	I1213 10:31:05.779066  353396 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 10:31:05.779074  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779080  353396 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 10:31:05.779087  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779091  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779102  353396 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 10:31:05.779106  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779110  353396 command_runner.go:130] >       "size":  "8034419",
	I1213 10:31:05.779116  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779120  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779128  353396 command_runner.go:130] >     },
	I1213 10:31:05.779131  353396 command_runner.go:130] >     {
	I1213 10:31:05.779138  353396 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 10:31:05.779145  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779150  353396 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 10:31:05.779157  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779163  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779175  353396 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1213 10:31:05.779181  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779185  353396 command_runner.go:130] >       "size":  "21168808",
	I1213 10:31:05.779190  353396 command_runner.go:130] >       "username":  "nonroot",
	I1213 10:31:05.779195  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779199  353396 command_runner.go:130] >     },
	I1213 10:31:05.779204  353396 command_runner.go:130] >     {
	I1213 10:31:05.779211  353396 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 10:31:05.779218  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779224  353396 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 10:31:05.779231  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779235  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779246  353396 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1213 10:31:05.779252  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779257  353396 command_runner.go:130] >       "size":  "21136588",
	I1213 10:31:05.779267  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.779275  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.779279  353396 command_runner.go:130] >       },
	I1213 10:31:05.779283  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779290  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779299  353396 command_runner.go:130] >     },
	I1213 10:31:05.779303  353396 command_runner.go:130] >     {
	I1213 10:31:05.779314  353396 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 10:31:05.779321  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779327  353396 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 10:31:05.779334  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779338  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779350  353396 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1213 10:31:05.779357  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779361  353396 command_runner.go:130] >       "size":  "24678359",
	I1213 10:31:05.779365  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.779375  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.779384  353396 command_runner.go:130] >       },
	I1213 10:31:05.779388  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779396  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779400  353396 command_runner.go:130] >     },
	I1213 10:31:05.779407  353396 command_runner.go:130] >     {
	I1213 10:31:05.779414  353396 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 10:31:05.779421  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779428  353396 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 10:31:05.779435  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779439  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779450  353396 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1213 10:31:05.779454  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779461  353396 command_runner.go:130] >       "size":  "20661043",
	I1213 10:31:05.779465  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.779473  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.779477  353396 command_runner.go:130] >       },
	I1213 10:31:05.779489  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779497  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779501  353396 command_runner.go:130] >     },
	I1213 10:31:05.779507  353396 command_runner.go:130] >     {
	I1213 10:31:05.779515  353396 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 10:31:05.779522  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779527  353396 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 10:31:05.779534  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779538  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779546  353396 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 10:31:05.779553  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779557  353396 command_runner.go:130] >       "size":  "22429671",
	I1213 10:31:05.779561  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779567  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779571  353396 command_runner.go:130] >     },
	I1213 10:31:05.779578  353396 command_runner.go:130] >     {
	I1213 10:31:05.779586  353396 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 10:31:05.779593  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779600  353396 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 10:31:05.779606  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779610  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779622  353396 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1213 10:31:05.779628  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779633  353396 command_runner.go:130] >       "size":  "15391364",
	I1213 10:31:05.779641  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.779645  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.779648  353396 command_runner.go:130] >       },
	I1213 10:31:05.779654  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779658  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779666  353396 command_runner.go:130] >     },
	I1213 10:31:05.779669  353396 command_runner.go:130] >     {
	I1213 10:31:05.779681  353396 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 10:31:05.779688  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779698  353396 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 10:31:05.779704  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779709  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779720  353396 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1213 10:31:05.779726  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779730  353396 command_runner.go:130] >       "size":  "267939",
	I1213 10:31:05.779735  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.779741  353396 command_runner.go:130] >         "value":  "65535"
	I1213 10:31:05.779744  353396 command_runner.go:130] >       },
	I1213 10:31:05.779753  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779758  353396 command_runner.go:130] >       "pinned":  true
	I1213 10:31:05.779764  353396 command_runner.go:130] >     }
	I1213 10:31:05.779767  353396 command_runner.go:130] >   ]
	I1213 10:31:05.779770  353396 command_runner.go:130] > }
	I1213 10:31:05.781791  353396 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 10:31:05.781813  353396 containerd.go:534] Images already preloaded, skipping extraction
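For a human-readable view of the same preload check, the JSON that crictl prints above can be filtered with jq; a sketch assuming jq is available on the node (the .images[].repoTags path matches the structure in the dump):

  sudo crictl images --output json | jq -r '.images[].repoTags[]'
  # docker.io/kindest/kindnetd:v20250512-df8de77b
  # gcr.io/k8s-minikube/storage-provisioner:v5
  # registry.k8s.io/coredns/coredns:v1.13.1
  # ... (one tag per preloaded image listed above)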
	I1213 10:31:05.781881  353396 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:31:05.805396  353396 command_runner.go:130] > {
	I1213 10:31:05.805420  353396 command_runner.go:130] >   "images":  [
	I1213 10:31:05.805426  353396 command_runner.go:130] >     {
	I1213 10:31:05.805436  353396 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 10:31:05.805441  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.805447  353396 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 10:31:05.805452  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805456  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.805465  353396 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1213 10:31:05.805471  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805477  353396 command_runner.go:130] >       "size":  "40636774",
	I1213 10:31:05.805485  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.805490  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.805501  353396 command_runner.go:130] >     },
	I1213 10:31:05.805504  353396 command_runner.go:130] >     {
	I1213 10:31:05.805512  353396 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 10:31:05.805517  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.805523  353396 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 10:31:05.805528  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805543  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.805556  353396 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 10:31:05.805566  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805576  353396 command_runner.go:130] >       "size":  "8034419",
	I1213 10:31:05.805580  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.805590  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.805594  353396 command_runner.go:130] >     },
	I1213 10:31:05.805601  353396 command_runner.go:130] >     {
	I1213 10:31:05.805608  353396 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 10:31:05.805619  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.805625  353396 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 10:31:05.805630  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805655  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.805669  353396 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1213 10:31:05.805675  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805680  353396 command_runner.go:130] >       "size":  "21168808",
	I1213 10:31:05.805687  353396 command_runner.go:130] >       "username":  "nonroot",
	I1213 10:31:05.805693  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.805697  353396 command_runner.go:130] >     },
	I1213 10:31:05.805701  353396 command_runner.go:130] >     {
	I1213 10:31:05.805707  353396 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 10:31:05.805715  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.805720  353396 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 10:31:05.805727  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805732  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.805743  353396 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1213 10:31:05.805750  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805754  353396 command_runner.go:130] >       "size":  "21136588",
	I1213 10:31:05.805762  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.805772  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.805778  353396 command_runner.go:130] >       },
	I1213 10:31:05.805783  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.805787  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.805795  353396 command_runner.go:130] >     },
	I1213 10:31:05.805803  353396 command_runner.go:130] >     {
	I1213 10:31:05.805810  353396 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 10:31:05.805818  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.805824  353396 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 10:31:05.805846  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805855  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.805863  353396 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1213 10:31:05.805867  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805873  353396 command_runner.go:130] >       "size":  "24678359",
	I1213 10:31:05.805877  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.805891  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.805894  353396 command_runner.go:130] >       },
	I1213 10:31:05.805899  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.805906  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.805910  353396 command_runner.go:130] >     },
	I1213 10:31:05.805917  353396 command_runner.go:130] >     {
	I1213 10:31:05.805924  353396 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 10:31:05.805931  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.805938  353396 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 10:31:05.805941  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805946  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.805956  353396 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1213 10:31:05.805963  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805967  353396 command_runner.go:130] >       "size":  "20661043",
	I1213 10:31:05.805972  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.805979  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.805983  353396 command_runner.go:130] >       },
	I1213 10:31:05.805991  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.805995  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.806002  353396 command_runner.go:130] >     },
	I1213 10:31:05.806005  353396 command_runner.go:130] >     {
	I1213 10:31:05.806012  353396 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 10:31:05.806021  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.806032  353396 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 10:31:05.806036  353396 command_runner.go:130] >       ],
	I1213 10:31:05.806040  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.806048  353396 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 10:31:05.806055  353396 command_runner.go:130] >       ],
	I1213 10:31:05.806059  353396 command_runner.go:130] >       "size":  "22429671",
	I1213 10:31:05.806068  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.806072  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.806078  353396 command_runner.go:130] >     },
	I1213 10:31:05.806082  353396 command_runner.go:130] >     {
	I1213 10:31:05.806089  353396 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 10:31:05.806096  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.806101  353396 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 10:31:05.806109  353396 command_runner.go:130] >       ],
	I1213 10:31:05.806113  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.806124  353396 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1213 10:31:05.806131  353396 command_runner.go:130] >       ],
	I1213 10:31:05.806135  353396 command_runner.go:130] >       "size":  "15391364",
	I1213 10:31:05.806139  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.806147  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.806151  353396 command_runner.go:130] >       },
	I1213 10:31:05.806159  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.806164  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.806171  353396 command_runner.go:130] >     },
	I1213 10:31:05.806174  353396 command_runner.go:130] >     {
	I1213 10:31:05.806180  353396 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 10:31:05.806186  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.806191  353396 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 10:31:05.806197  353396 command_runner.go:130] >       ],
	I1213 10:31:05.806202  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.806213  353396 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1213 10:31:05.806217  353396 command_runner.go:130] >       ],
	I1213 10:31:05.806230  353396 command_runner.go:130] >       "size":  "267939",
	I1213 10:31:05.806238  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.806242  353396 command_runner.go:130] >         "value":  "65535"
	I1213 10:31:05.806251  353396 command_runner.go:130] >       },
	I1213 10:31:05.806255  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.806259  353396 command_runner.go:130] >       "pinned":  true
	I1213 10:31:05.806262  353396 command_runner.go:130] >     }
	I1213 10:31:05.806267  353396 command_runner.go:130] >   ]
	I1213 10:31:05.806271  353396 command_runner.go:130] > }
	I1213 10:31:05.808725  353396 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 10:31:05.808749  353396 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:31:05.808757  353396 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1213 10:31:05.808887  353396 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-652709 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 10:31:05.808967  353396 ssh_runner.go:195] Run: sudo crictl info
	I1213 10:31:05.831572  353396 command_runner.go:130] > {
	I1213 10:31:05.831594  353396 command_runner.go:130] >   "cniconfig": {
	I1213 10:31:05.831601  353396 command_runner.go:130] >     "Networks": [
	I1213 10:31:05.831604  353396 command_runner.go:130] >       {
	I1213 10:31:05.831609  353396 command_runner.go:130] >         "Config": {
	I1213 10:31:05.831614  353396 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1213 10:31:05.831619  353396 command_runner.go:130] >           "Name": "cni-loopback",
	I1213 10:31:05.831623  353396 command_runner.go:130] >           "Plugins": [
	I1213 10:31:05.831627  353396 command_runner.go:130] >             {
	I1213 10:31:05.831631  353396 command_runner.go:130] >               "Network": {
	I1213 10:31:05.831635  353396 command_runner.go:130] >                 "ipam": {},
	I1213 10:31:05.831641  353396 command_runner.go:130] >                 "type": "loopback"
	I1213 10:31:05.831650  353396 command_runner.go:130] >               },
	I1213 10:31:05.831662  353396 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1213 10:31:05.831670  353396 command_runner.go:130] >             }
	I1213 10:31:05.831674  353396 command_runner.go:130] >           ],
	I1213 10:31:05.831684  353396 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1213 10:31:05.831688  353396 command_runner.go:130] >         },
	I1213 10:31:05.831696  353396 command_runner.go:130] >         "IFName": "lo"
	I1213 10:31:05.831703  353396 command_runner.go:130] >       }
	I1213 10:31:05.831707  353396 command_runner.go:130] >     ],
	I1213 10:31:05.831712  353396 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1213 10:31:05.831720  353396 command_runner.go:130] >     "PluginDirs": [
	I1213 10:31:05.831724  353396 command_runner.go:130] >       "/opt/cni/bin"
	I1213 10:31:05.831731  353396 command_runner.go:130] >     ],
	I1213 10:31:05.831736  353396 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1213 10:31:05.831743  353396 command_runner.go:130] >     "Prefix": "eth"
	I1213 10:31:05.831747  353396 command_runner.go:130] >   },
	I1213 10:31:05.831754  353396 command_runner.go:130] >   "config": {
	I1213 10:31:05.831762  353396 command_runner.go:130] >     "cdiSpecDirs": [
	I1213 10:31:05.831765  353396 command_runner.go:130] >       "/etc/cdi",
	I1213 10:31:05.831781  353396 command_runner.go:130] >       "/var/run/cdi"
	I1213 10:31:05.831789  353396 command_runner.go:130] >     ],
	I1213 10:31:05.831793  353396 command_runner.go:130] >     "cni": {
	I1213 10:31:05.831797  353396 command_runner.go:130] >       "binDir": "",
	I1213 10:31:05.831801  353396 command_runner.go:130] >       "binDirs": [
	I1213 10:31:05.831810  353396 command_runner.go:130] >         "/opt/cni/bin"
	I1213 10:31:05.831814  353396 command_runner.go:130] >       ],
	I1213 10:31:05.831818  353396 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1213 10:31:05.831821  353396 command_runner.go:130] >       "confTemplate": "",
	I1213 10:31:05.831825  353396 command_runner.go:130] >       "ipPref": "",
	I1213 10:31:05.831829  353396 command_runner.go:130] >       "maxConfNum": 1,
	I1213 10:31:05.831832  353396 command_runner.go:130] >       "setupSerially": false,
	I1213 10:31:05.831837  353396 command_runner.go:130] >       "useInternalLoopback": false
	I1213 10:31:05.831840  353396 command_runner.go:130] >     },
	I1213 10:31:05.831851  353396 command_runner.go:130] >     "containerd": {
	I1213 10:31:05.831859  353396 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1213 10:31:05.831864  353396 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1213 10:31:05.831869  353396 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1213 10:31:05.831872  353396 command_runner.go:130] >       "runtimes": {
	I1213 10:31:05.831875  353396 command_runner.go:130] >         "runc": {
	I1213 10:31:05.831879  353396 command_runner.go:130] >           "ContainerAnnotations": null,
	I1213 10:31:05.831884  353396 command_runner.go:130] >           "PodAnnotations": null,
	I1213 10:31:05.831891  353396 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1213 10:31:05.831895  353396 command_runner.go:130] >           "cgroupWritable": false,
	I1213 10:31:05.831899  353396 command_runner.go:130] >           "cniConfDir": "",
	I1213 10:31:05.831905  353396 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1213 10:31:05.831910  353396 command_runner.go:130] >           "io_type": "",
	I1213 10:31:05.831919  353396 command_runner.go:130] >           "options": {
	I1213 10:31:05.831924  353396 command_runner.go:130] >             "BinaryName": "",
	I1213 10:31:05.831929  353396 command_runner.go:130] >             "CriuImagePath": "",
	I1213 10:31:05.831936  353396 command_runner.go:130] >             "CriuWorkPath": "",
	I1213 10:31:05.831940  353396 command_runner.go:130] >             "IoGid": 0,
	I1213 10:31:05.831948  353396 command_runner.go:130] >             "IoUid": 0,
	I1213 10:31:05.831953  353396 command_runner.go:130] >             "NoNewKeyring": false,
	I1213 10:31:05.831961  353396 command_runner.go:130] >             "Root": "",
	I1213 10:31:05.831965  353396 command_runner.go:130] >             "ShimCgroup": "",
	I1213 10:31:05.831970  353396 command_runner.go:130] >             "SystemdCgroup": false
	I1213 10:31:05.831992  353396 command_runner.go:130] >           },
	I1213 10:31:05.831998  353396 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1213 10:31:05.832004  353396 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1213 10:31:05.832011  353396 command_runner.go:130] >           "runtimePath": "",
	I1213 10:31:05.832017  353396 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1213 10:31:05.832025  353396 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1213 10:31:05.832030  353396 command_runner.go:130] >           "snapshotter": ""
	I1213 10:31:05.832037  353396 command_runner.go:130] >         }
	I1213 10:31:05.832040  353396 command_runner.go:130] >       }
	I1213 10:31:05.832043  353396 command_runner.go:130] >     },
	I1213 10:31:05.832055  353396 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1213 10:31:05.832065  353396 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1213 10:31:05.832073  353396 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1213 10:31:05.832081  353396 command_runner.go:130] >     "disableApparmor": false,
	I1213 10:31:05.832086  353396 command_runner.go:130] >     "disableHugetlbController": true,
	I1213 10:31:05.832093  353396 command_runner.go:130] >     "disableProcMount": false,
	I1213 10:31:05.832098  353396 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1213 10:31:05.832106  353396 command_runner.go:130] >     "enableCDI": true,
	I1213 10:31:05.832110  353396 command_runner.go:130] >     "enableSelinux": false,
	I1213 10:31:05.832118  353396 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1213 10:31:05.832123  353396 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1213 10:31:05.832131  353396 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1213 10:31:05.832135  353396 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1213 10:31:05.832140  353396 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1213 10:31:05.832144  353396 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1213 10:31:05.832151  353396 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1213 10:31:05.832157  353396 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1213 10:31:05.832165  353396 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1213 10:31:05.832171  353396 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1213 10:31:05.832180  353396 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1213 10:31:05.832185  353396 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1213 10:31:05.832192  353396 command_runner.go:130] >   },
	I1213 10:31:05.832195  353396 command_runner.go:130] >   "features": {
	I1213 10:31:05.832204  353396 command_runner.go:130] >     "supplemental_groups_policy": true
	I1213 10:31:05.832208  353396 command_runner.go:130] >   },
	I1213 10:31:05.832212  353396 command_runner.go:130] >   "golang": "go1.24.9",
	I1213 10:31:05.832222  353396 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1213 10:31:05.832235  353396 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1213 10:31:05.832240  353396 command_runner.go:130] >   "runtimeHandlers": [
	I1213 10:31:05.832245  353396 command_runner.go:130] >     {
	I1213 10:31:05.832248  353396 command_runner.go:130] >       "features": {
	I1213 10:31:05.832257  353396 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1213 10:31:05.832262  353396 command_runner.go:130] >         "user_namespaces": true
	I1213 10:31:05.832268  353396 command_runner.go:130] >       }
	I1213 10:31:05.832276  353396 command_runner.go:130] >     },
	I1213 10:31:05.832283  353396 command_runner.go:130] >     {
	I1213 10:31:05.832287  353396 command_runner.go:130] >       "features": {
	I1213 10:31:05.832295  353396 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1213 10:31:05.832299  353396 command_runner.go:130] >         "user_namespaces": true
	I1213 10:31:05.832302  353396 command_runner.go:130] >       },
	I1213 10:31:05.832307  353396 command_runner.go:130] >       "name": "runc"
	I1213 10:31:05.832310  353396 command_runner.go:130] >     }
	I1213 10:31:05.832313  353396 command_runner.go:130] >   ],
	I1213 10:31:05.832316  353396 command_runner.go:130] >   "status": {
	I1213 10:31:05.832320  353396 command_runner.go:130] >     "conditions": [
	I1213 10:31:05.832325  353396 command_runner.go:130] >       {
	I1213 10:31:05.832330  353396 command_runner.go:130] >         "message": "",
	I1213 10:31:05.832337  353396 command_runner.go:130] >         "reason": "",
	I1213 10:31:05.832344  353396 command_runner.go:130] >         "status": true,
	I1213 10:31:05.832354  353396 command_runner.go:130] >         "type": "RuntimeReady"
	I1213 10:31:05.832362  353396 command_runner.go:130] >       },
	I1213 10:31:05.832365  353396 command_runner.go:130] >       {
	I1213 10:31:05.832375  353396 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1213 10:31:05.832380  353396 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1213 10:31:05.832383  353396 command_runner.go:130] >         "status": false,
	I1213 10:31:05.832388  353396 command_runner.go:130] >         "type": "NetworkReady"
	I1213 10:31:05.832396  353396 command_runner.go:130] >       },
	I1213 10:31:05.832399  353396 command_runner.go:130] >       {
	I1213 10:31:05.832422  353396 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1213 10:31:05.832434  353396 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1213 10:31:05.832444  353396 command_runner.go:130] >         "status": false,
	I1213 10:31:05.832451  353396 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1213 10:31:05.832454  353396 command_runner.go:130] >       }
	I1213 10:31:05.832457  353396 command_runner.go:130] >     ]
	I1213 10:31:05.832461  353396 command_runner.go:130] >   }
	I1213 10:31:05.832463  353396 command_runner.go:130] > }
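
The JSON above is containerd's CRI status/config dump that minikube collects before choosing a CNI (note the NetworkReady=false condition, which is why kindnet is recommended next). A minimal Go sketch of fetching the same document, assuming crictl is installed and pointed at containerd's default socket; this mirrors, but is not, minikube's command_runner path:

package main

import (
	"fmt"
	"os/exec"
)

// Ask the container runtime for its CRI info dump; with containerd this
// returns the same JSON shown above (cni, containerd, runtimeHandlers,
// and status conditions such as RuntimeReady / NetworkReady).
func main() {
	out, err := exec.Command("sudo", "crictl", "info").CombinedOutput()
	if err != nil {
		fmt.Printf("crictl info failed: %v\n%s", err, out)
		return
	}
	fmt.Println(string(out))
}
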
	I1213 10:31:05.834983  353396 cni.go:84] Creating CNI manager for ""
	I1213 10:31:05.835008  353396 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:31:05.835032  353396 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:31:05.835055  353396 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-652709 NodeName:functional-652709 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:31:05.835177  353396 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-652709"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
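
The generated kubeadm config above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is shipped to /var/tmp/minikube/kubeadm.yaml.new below. A minimal sketch for sanity-checking such a file by listing each document's apiVersion/kind; it assumes the gopkg.in/yaml.v3 dependency and is not part of minikube:

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3" // assumed dependency for multi-document YAML
)

// Decode each YAML document in the generated kubeadm config and print its
// apiVersion/kind, a quick structural check before handing it to kubeadm.
func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			break // io.EOF once all four documents are consumed
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}
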
	
	I1213 10:31:05.835253  353396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:31:05.843333  353396 command_runner.go:130] > kubeadm
	I1213 10:31:05.843355  353396 command_runner.go:130] > kubectl
	I1213 10:31:05.843360  353396 command_runner.go:130] > kubelet
	I1213 10:31:05.843375  353396 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:31:05.843451  353396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:31:05.851169  353396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 10:31:05.865230  353396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:31:05.877883  353396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1213 10:31:05.891827  353396 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:31:05.896023  353396 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1213 10:31:05.896126  353396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:31:06.037110  353396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:31:06.663693  353396 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709 for IP: 192.168.49.2
	I1213 10:31:06.663826  353396 certs.go:195] generating shared ca certs ...
	I1213 10:31:06.663858  353396 certs.go:227] acquiring lock for ca certs: {Name:mkca84a85b1b664dba08ef567e84d493256f825e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:31:06.664061  353396 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key
	I1213 10:31:06.664135  353396 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key
	I1213 10:31:06.664169  353396 certs.go:257] generating profile certs ...
	I1213 10:31:06.664331  353396 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.key
	I1213 10:31:06.664442  353396 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.key.86e7afd1
	I1213 10:31:06.664517  353396 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.key
	I1213 10:31:06.664552  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 10:31:06.664592  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 10:31:06.664634  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 10:31:06.664671  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 10:31:06.664701  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 10:31:06.664745  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 10:31:06.664781  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 10:31:06.664811  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 10:31:06.664893  353396 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem (1338 bytes)
	W1213 10:31:06.664965  353396 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915_empty.pem, impossibly tiny 0 bytes
	I1213 10:31:06.664999  353396 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:31:06.665056  353396 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:31:06.665113  353396 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:31:06.665174  353396 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem (1675 bytes)
	I1213 10:31:06.665258  353396 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 10:31:06.665367  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> /usr/share/ca-certificates/3089152.pem
	I1213 10:31:06.665414  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:31:06.665453  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem -> /usr/share/ca-certificates/308915.pem
	I1213 10:31:06.666083  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:31:06.686373  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:31:06.706393  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:31:06.727893  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:31:06.748376  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:31:06.769115  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 10:31:06.788184  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:31:06.807317  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:31:06.826240  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /usr/share/ca-certificates/3089152.pem (1708 bytes)
	I1213 10:31:06.845063  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:31:06.863130  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem --> /usr/share/ca-certificates/308915.pem (1338 bytes)
	I1213 10:31:06.881577  353396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:31:06.894536  353396 ssh_runner.go:195] Run: openssl version
	I1213 10:31:06.900741  353396 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1213 10:31:06.901231  353396 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/308915.pem
	I1213 10:31:06.909107  353396 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/308915.pem /etc/ssl/certs/308915.pem
	I1213 10:31:06.916518  353396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/308915.pem
	I1213 10:31:06.920250  353396 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 10:22 /usr/share/ca-certificates/308915.pem
	I1213 10:31:06.920295  353396 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:22 /usr/share/ca-certificates/308915.pem
	I1213 10:31:06.920347  353396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308915.pem
	I1213 10:31:06.961321  353396 command_runner.go:130] > 51391683
	I1213 10:31:06.961405  353396 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:31:06.969200  353396 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3089152.pem
	I1213 10:31:06.976714  353396 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3089152.pem /etc/ssl/certs/3089152.pem
	I1213 10:31:06.984537  353396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3089152.pem
	I1213 10:31:06.988716  353396 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 10:22 /usr/share/ca-certificates/3089152.pem
	I1213 10:31:06.988763  353396 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:22 /usr/share/ca-certificates/3089152.pem
	I1213 10:31:06.988817  353396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3089152.pem
	I1213 10:31:07.029862  353396 command_runner.go:130] > 3ec20f2e
	I1213 10:31:07.030284  353396 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:31:07.037958  353396 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:31:07.045451  353396 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:31:07.053144  353396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:31:07.056994  353396 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:31:07.057051  353396 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:31:07.057104  353396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:31:07.097856  353396 command_runner.go:130] > b5213941
	I1213 10:31:07.098292  353396 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
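
The hash-and-symlink steps above are how OpenSSL-style trust stores index CA certificates: the subject hash names a <hash>.0 symlink in /etc/ssl/certs. A small Go sketch of the same two steps, shelling out to openssl exactly as the log does (the ln step is only printed here, not executed):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Compute the OpenSSL subject hash of a CA certificate, then show the
// <hash>.0 symlink that would make the system trust store pick it up.
func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as logged above
	fmt.Printf("would run: ln -fs %s /etc/ssl/certs/%s.0\n", cert, hash)
}
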
	I1213 10:31:07.106039  353396 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:31:07.109917  353396 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:31:07.109945  353396 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 10:31:07.109953  353396 command_runner.go:130] > Device: 259,1	Inode: 3399222     Links: 1
	I1213 10:31:07.109960  353396 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 10:31:07.109966  353396 command_runner.go:130] > Access: 2025-12-13 10:26:59.103845116 +0000
	I1213 10:31:07.109971  353396 command_runner.go:130] > Modify: 2025-12-13 10:22:52.641441584 +0000
	I1213 10:31:07.109977  353396 command_runner.go:130] > Change: 2025-12-13 10:22:52.641441584 +0000
	I1213 10:31:07.109982  353396 command_runner.go:130] >  Birth: 2025-12-13 10:22:52.641441584 +0000
	I1213 10:31:07.110079  353396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:31:07.151277  353396 command_runner.go:130] > Certificate will not expire
	I1213 10:31:07.151699  353396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:31:07.192420  353396 command_runner.go:130] > Certificate will not expire
	I1213 10:31:07.192514  353396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:31:07.233686  353396 command_runner.go:130] > Certificate will not expire
	I1213 10:31:07.233923  353396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:31:07.275302  353396 command_runner.go:130] > Certificate will not expire
	I1213 10:31:07.275760  353396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:31:07.324799  353396 command_runner.go:130] > Certificate will not expire
	I1213 10:31:07.325290  353396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 10:31:07.377047  353396 command_runner.go:130] > Certificate will not expire
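
The six `openssl x509 -checkend 86400` runs above verify that no control-plane certificate expires within the next 24 hours. A pure-Go equivalent of that predicate using crypto/x509, with the certificate paths taken from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// Flag any certificate that expires within 24 hours, the same check as
// `openssl x509 -checkend 86400`.
func main() {
	certs := []string{ // paths taken from the log above
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
		"/var/lib/minikube/certs/etcd/peer.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	deadline := time.Now().Add(24 * time.Hour)
	for _, path := range certs {
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Println(path, "read error:", err)
			continue
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Println(path, "no PEM block found")
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println(path, "parse error:", err)
			continue
		}
		if cert.NotAfter.Before(deadline) {
			fmt.Println(path, "expires within 24h:", cert.NotAfter)
		} else {
			fmt.Println(path, "will not expire within 24h")
		}
	}
}
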
	I1213 10:31:07.377629  353396 kubeadm.go:401] StartCluster: {Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:31:07.377757  353396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 10:31:07.377843  353396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:31:07.405423  353396 cri.go:89] found id: ""
	I1213 10:31:07.405508  353396 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:31:07.414529  353396 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1213 10:31:07.414595  353396 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1213 10:31:07.414615  353396 command_runner.go:130] > /var/lib/minikube/etcd:
	I1213 10:31:07.415690  353396 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:31:07.415743  353396 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:31:07.415805  353396 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:31:07.423401  353396 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:31:07.423850  353396 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-652709" does not appear in /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:31:07.423998  353396 kubeconfig.go:62] /home/jenkins/minikube-integration/22127-307042/kubeconfig needs updating (will repair): [kubeconfig missing "functional-652709" cluster setting kubeconfig missing "functional-652709" context setting]
	I1213 10:31:07.424313  353396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/kubeconfig: {Name:mk9039d1d506122ff52224469c478e739c5ebabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:31:07.424829  353396 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:31:07.425032  353396 kapi.go:59] client config for functional-652709: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt", KeyFile:"/home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.key", CAFile:"/home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 10:31:07.425626  353396 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 10:31:07.425778  353396 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 10:31:07.425812  353396 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 10:31:07.425854  353396 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 10:31:07.425888  353396 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 10:31:07.425723  353396 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 10:31:07.426245  353396 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:31:07.437887  353396 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1213 10:31:07.437960  353396 kubeadm.go:602] duration metric: took 22.197398ms to restartPrimaryControlPlane
	I1213 10:31:07.437984  353396 kubeadm.go:403] duration metric: took 60.362619ms to StartCluster
	I1213 10:31:07.438027  353396 settings.go:142] acquiring lock: {Name:mk079e9a25ebbc2c8fbae42d4c6ed096a652c00b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:31:07.438107  353396 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:31:07.438874  353396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/kubeconfig: {Name:mk9039d1d506122ff52224469c478e739c5ebabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:31:07.439133  353396 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 10:31:07.439572  353396 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:31:07.439649  353396 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 10:31:07.439895  353396 addons.go:70] Setting storage-provisioner=true in profile "functional-652709"
	I1213 10:31:07.439924  353396 addons.go:239] Setting addon storage-provisioner=true in "functional-652709"
	I1213 10:31:07.440086  353396 host.go:66] Checking if "functional-652709" exists ...
	I1213 10:31:07.439942  353396 addons.go:70] Setting default-storageclass=true in profile "functional-652709"
	I1213 10:31:07.440166  353396 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-652709"
	I1213 10:31:07.440530  353396 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
	I1213 10:31:07.440672  353396 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
	I1213 10:31:07.445924  353396 out.go:179] * Verifying Kubernetes components...
	I1213 10:31:07.449291  353396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:31:07.477163  353396 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:31:07.477818  353396 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:31:07.477982  353396 kapi.go:59] client config for functional-652709: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt", KeyFile:"/home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.key", CAFile:"/home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 10:31:07.478289  353396 addons.go:239] Setting addon default-storageclass=true in "functional-652709"
	I1213 10:31:07.478317  353396 host.go:66] Checking if "functional-652709" exists ...
	I1213 10:31:07.478815  353396 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
	I1213 10:31:07.480787  353396 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:07.480804  353396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:31:07.480857  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:07.506052  353396 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:07.506074  353396 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:31:07.506149  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:07.532221  353396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:31:07.553427  353396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:31:07.654835  353396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:31:07.677297  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:07.691553  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:08.413950  353396 node_ready.go:35] waiting up to 6m0s for node "functional-652709" to be "Ready" ...
	I1213 10:31:08.414025  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:08.414055  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:08.414088  353396 retry.go:31] will retry after 345.496875ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:08.414094  353396 type.go:168] "Request Body" body=""
	I1213 10:31:08.414127  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:08.414139  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:08.414145  353396 retry.go:31] will retry after 223.686843ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:08.414166  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:08.414498  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
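
The request/response pairs above are minikube polling the node's Ready condition while the apiserver comes back up. A hedged client-go sketch of the same poll; the kubeconfig path is illustrative and this is not minikube's node_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Poll the node until its Ready condition is True, tolerating the
// "connection refused" errors seen while the apiserver restarts.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-652709", metav1.GetOptions{})
		if err != nil {
			fmt.Println("will retry:", err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // the log above polls roughly twice a second
	}
}
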
	I1213 10:31:08.639014  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:08.708995  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:08.709048  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:08.709067  353396 retry.go:31] will retry after 375.63163ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:08.760277  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:08.818789  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:08.818835  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:08.818856  353396 retry.go:31] will retry after 406.416897ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:08.915066  353396 type.go:168] "Request Body" body=""
	I1213 10:31:08.915143  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:08.915484  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:09.084944  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:09.142294  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:09.145823  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:09.145856  353396 retry.go:31] will retry after 462.162588ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:09.226047  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:09.284957  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:09.285005  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:09.285029  353396 retry.go:31] will retry after 590.841892ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:09.414170  353396 type.go:168] "Request Body" body=""
	I1213 10:31:09.414270  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:09.414569  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:09.609047  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:09.669723  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:09.669808  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:09.669831  353396 retry.go:31] will retry after 579.936823ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:09.876057  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:09.914654  353396 type.go:168] "Request Body" body=""
	I1213 10:31:09.914781  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:09.915113  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:09.958653  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:09.959319  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:09.959356  353396 retry.go:31] will retry after 607.747477ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:10.250896  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:10.320327  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:10.320375  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:10.320395  353396 retry.go:31] will retry after 1.522220042s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:10.414670  353396 type.go:168] "Request Body" body=""
	I1213 10:31:10.414776  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:10.415078  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:10.415128  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:10.567453  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:10.637133  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:10.637170  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:10.637192  353396 retry.go:31] will retry after 1.738217883s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
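
The `retry.go:31] will retry after ...` lines show the addon applies being retried with growing, jittered delays until the apiserver on port 8441 accepts connections again. A minimal sketch of that retry-with-backoff pattern, under the assumption of geometric growth plus jitter; it is not minikube's actual retry implementation:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// Retry fn with exponentially growing, jittered delays, in the spirit of
// the 345ms -> 1.7s progression in the log above.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base * time.Duration(1<<i)                   // geometric growth
		delay += time.Duration(rand.Int63n(int64(delay / 2))) // random jitter
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, 300*time.Millisecond, func() error {
		calls++
		if calls < 4 {
			return errors.New("connect: connection refused") // apiserver still down
		}
		return nil
	})
	fmt.Println("final:", err)
}
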
	I1213 10:31:10.914619  353396 type.go:168] "Request Body" body=""
	I1213 10:31:10.914713  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:10.915040  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:11.414837  353396 type.go:168] "Request Body" body=""
	I1213 10:31:11.414916  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:11.415223  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:11.842893  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:11.907661  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:11.907696  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:11.907728  353396 retry.go:31] will retry after 2.533033731s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:11.915037  353396 type.go:168] "Request Body" body=""
	I1213 10:31:11.915117  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:11.915423  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:12.376116  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:12.414883  353396 type.go:168] "Request Body" body=""
	I1213 10:31:12.414962  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:12.415244  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:12.415286  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:12.436301  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:12.440043  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:12.440078  353396 retry.go:31] will retry after 2.549851387s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:12.914750  353396 type.go:168] "Request Body" body=""
	I1213 10:31:12.914826  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:12.915091  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:13.414886  353396 type.go:168] "Request Body" body=""
	I1213 10:31:13.414964  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:13.415325  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:13.914980  353396 type.go:168] "Request Body" body=""
	I1213 10:31:13.915058  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:13.915431  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:14.414144  353396 type.go:168] "Request Body" body=""
	I1213 10:31:14.414226  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:14.414516  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:14.441795  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:14.521460  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:14.521500  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:14.521521  353396 retry.go:31] will retry after 3.212514963s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:14.915209  353396 type.go:168] "Request Body" body=""
	I1213 10:31:14.915291  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:14.915586  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:14.915630  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:14.990917  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:15.080462  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:15.084181  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:15.084216  353396 retry.go:31] will retry after 3.733369975s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:15.414758  353396 type.go:168] "Request Body" body=""
	I1213 10:31:15.414836  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:15.415124  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:15.914893  353396 type.go:168] "Request Body" body=""
	I1213 10:31:15.914962  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:15.915239  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:16.415068  353396 type.go:168] "Request Body" body=""
	I1213 10:31:16.415147  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:16.415460  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:16.914139  353396 type.go:168] "Request Body" body=""
	I1213 10:31:16.914218  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:16.914520  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:17.414166  353396 type.go:168] "Request Body" body=""
	I1213 10:31:17.414237  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:17.414497  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:17.414542  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
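The node_ready.go loop above boils down to: GET the node every ~500ms (note the xx.414/xx.914 timestamps) and read its Ready condition, tolerating "connection refused" while the apiserver is down. A minimal client-go version of that check, assuming the node name and kubeconfig path from the log:

```go
// node_ready.go: minimal sketch (not minikube's node_ready.go) of the
// readiness poll in the log: GET the node, inspect its Ready condition.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(clientset *kubernetes.Clientset, name string) (bool, error) {
	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. "connect: connection refused" while the apiserver restarts
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	for {
		ready, err := nodeReady(clientset, "functional-652709")
		if err != nil {
			fmt.Println("will retry:", err)
		} else if ready {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log polls on a ~500ms cadence
	}
}
```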
	I1213 10:31:17.734589  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:17.791638  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:17.795431  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:17.795464  353396 retry.go:31] will retry after 2.280639456s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
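Why validation is the first thing to fail here: kubectl apply downloads the OpenAPI schema from the apiserver to validate the manifest client-side, so with the apiserver unreachable the schema fetch fails before anything is applied. The --validate=false escape hatch the error message suggests only skips that fetch; the apply itself still needs the same apiserver, so in this situation it would merely move the failure. A hypothetical probe (not from minikube) that checks the apiserver's /readyz before bothering to retry:

```go
// apiserver_probe.go: hypothetical helper. Checks whether the apiserver
// answers /readyz; while this returns false, both schema download and the
// apply itself will fail with "connection refused".
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func apiserverReady(base string) bool {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The test cluster uses a self-signed CA; skip verification for the probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(base + "/readyz")
	if err != nil {
		return false // e.g. "dial tcp [::1]:8441: connect: connection refused"
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	// Port 8441 matches the --apiserver-port used by this test.
	fmt.Println("apiserver ready:", apiserverReady("https://localhost:8441"))
}
```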
	I1213 10:31:17.914828  353396 type.go:168] "Request Body" body=""
	I1213 10:31:17.914907  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:17.915229  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:18.415056  353396 type.go:168] "Request Body" body=""
	I1213 10:31:18.415138  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:18.415477  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:18.817969  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:18.882172  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:18.882215  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:18.882235  353396 retry.go:31] will retry after 4.138686797s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:18.914321  353396 type.go:168] "Request Body" body=""
	I1213 10:31:18.914392  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:18.914663  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:19.414265  353396 type.go:168] "Request Body" body=""
	I1213 10:31:19.414351  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:19.414671  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:19.414743  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:19.914452  353396 type.go:168] "Request Body" body=""
	I1213 10:31:19.914532  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:19.914885  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:20.077334  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:20.142139  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:20.142182  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:20.142203  353396 retry.go:31] will retry after 8.217804099s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:20.414481  353396 type.go:168] "Request Body" body=""
	I1213 10:31:20.414554  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:20.414845  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:20.914228  353396 type.go:168] "Request Body" body=""
	I1213 10:31:20.914302  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:20.914590  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:21.414310  353396 type.go:168] "Request Body" body=""
	I1213 10:31:21.414387  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:21.414748  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:21.414804  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:21.914112  353396 type.go:168] "Request Body" body=""
	I1213 10:31:21.914192  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:21.914465  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:22.414190  353396 type.go:168] "Request Body" body=""
	I1213 10:31:22.414276  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:22.414625  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:22.914222  353396 type.go:168] "Request Body" body=""
	I1213 10:31:22.914304  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:22.914654  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:23.021940  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:23.082413  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:23.086273  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:23.086307  353396 retry.go:31] will retry after 3.228749017s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:23.414853  353396 type.go:168] "Request Body" body=""
	I1213 10:31:23.414928  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:23.415204  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:23.415248  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:23.915086  353396 type.go:168] "Request Body" body=""
	I1213 10:31:23.915169  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:23.915500  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:24.414244  353396 type.go:168] "Request Body" body=""
	I1213 10:31:24.414323  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:24.414750  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:24.914140  353396 type.go:168] "Request Body" body=""
	I1213 10:31:24.914235  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:24.914512  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:25.414276  353396 type.go:168] "Request Body" body=""
	I1213 10:31:25.414350  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:25.414719  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:25.914418  353396 type.go:168] "Request Body" body=""
	I1213 10:31:25.914503  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:25.914851  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:25.914921  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:26.315317  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:26.370308  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:26.374436  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:26.374468  353396 retry.go:31] will retry after 6.181513775s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:26.414616  353396 type.go:168] "Request Body" body=""
	I1213 10:31:26.414702  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:26.414956  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:26.914223  353396 type.go:168] "Request Body" body=""
	I1213 10:31:26.914299  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:26.914631  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:27.414210  353396 type.go:168] "Request Body" body=""
	I1213 10:31:27.414287  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:27.414626  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:27.914667  353396 type.go:168] "Request Body" body=""
	I1213 10:31:27.914756  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:27.915024  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:27.915076  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:28.360839  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:28.414331  353396 type.go:168] "Request Body" body=""
	I1213 10:31:28.414406  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:28.414626  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:28.418709  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:28.418758  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:28.418778  353396 retry.go:31] will retry after 9.214302946s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:28.914367  353396 type.go:168] "Request Body" body=""
	I1213 10:31:28.914492  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:28.914860  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:29.414102  353396 type.go:168] "Request Body" body=""
	I1213 10:31:29.414175  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:29.414432  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:29.914164  353396 type.go:168] "Request Body" body=""
	I1213 10:31:29.914249  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:29.914544  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:30.414171  353396 type.go:168] "Request Body" body=""
	I1213 10:31:30.414256  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:30.414595  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:30.414647  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:30.914147  353396 type.go:168] "Request Body" body=""
	I1213 10:31:30.914252  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:30.914572  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:31.414262  353396 type.go:168] "Request Body" body=""
	I1213 10:31:31.414347  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:31.414732  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:31.914303  353396 type.go:168] "Request Body" body=""
	I1213 10:31:31.914387  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:31.914757  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:32.415148  353396 type.go:168] "Request Body" body=""
	I1213 10:31:32.415224  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:32.415495  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:32.415554  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:32.557021  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:32.617384  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:32.617431  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:32.617463  353396 retry.go:31] will retry after 16.934984193s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:32.914304  353396 type.go:168] "Request Body" body=""
	I1213 10:31:32.914388  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:32.914742  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:33.414206  353396 type.go:168] "Request Body" body=""
	I1213 10:31:33.414289  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:33.414637  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:33.914139  353396 type.go:168] "Request Body" body=""
	I1213 10:31:33.914219  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:33.914504  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:34.414239  353396 type.go:168] "Request Body" body=""
	I1213 10:31:34.414324  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:34.414665  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:34.914262  353396 type.go:168] "Request Body" body=""
	I1213 10:31:34.914338  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:34.914682  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:34.914754  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:35.414981  353396 type.go:168] "Request Body" body=""
	I1213 10:31:35.415048  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:35.415330  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:35.915144  353396 type.go:168] "Request Body" body=""
	I1213 10:31:35.915224  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:35.915612  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:36.414208  353396 type.go:168] "Request Body" body=""
	I1213 10:31:36.414294  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:36.414625  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:36.914192  353396 type.go:168] "Request Body" body=""
	I1213 10:31:36.914277  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:36.914578  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:37.414237  353396 type.go:168] "Request Body" body=""
	I1213 10:31:37.414313  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:37.414629  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:37.414735  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:37.633334  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:37.695165  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:37.698650  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:37.698681  353396 retry.go:31] will retry after 9.333447966s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:37.915161  353396 type.go:168] "Request Body" body=""
	I1213 10:31:37.915240  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:37.915589  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:38.414123  353396 type.go:168] "Request Body" body=""
	I1213 10:31:38.414195  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:38.414520  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:38.914231  353396 type.go:168] "Request Body" body=""
	I1213 10:31:38.914310  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:38.914622  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:39.414370  353396 type.go:168] "Request Body" body=""
	I1213 10:31:39.414450  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:39.414771  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:39.414825  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:39.914151  353396 type.go:168] "Request Body" body=""
	I1213 10:31:39.914247  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:39.914597  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:40.414232  353396 type.go:168] "Request Body" body=""
	I1213 10:31:40.414305  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:40.414590  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:40.914274  353396 type.go:168] "Request Body" body=""
	I1213 10:31:40.914351  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:40.914714  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:41.414140  353396 type.go:168] "Request Body" body=""
	I1213 10:31:41.414213  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:41.414477  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:41.914180  353396 type.go:168] "Request Body" body=""
	I1213 10:31:41.914281  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:41.914609  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:41.914666  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:42.414204  353396 type.go:168] "Request Body" body=""
	I1213 10:31:42.414282  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:42.414600  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:42.914120  353396 type.go:168] "Request Body" body=""
	I1213 10:31:42.914194  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:42.914564  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:43.414289  353396 type.go:168] "Request Body" body=""
	I1213 10:31:43.414375  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:43.414737  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:43.914224  353396 type.go:168] "Request Body" body=""
	I1213 10:31:43.914304  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:43.914640  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:43.914712  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:44.414209  353396 type.go:168] "Request Body" body=""
	I1213 10:31:44.414295  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:44.414641  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:44.914233  353396 type.go:168] "Request Body" body=""
	I1213 10:31:44.914306  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:44.914657  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:45.414435  353396 type.go:168] "Request Body" body=""
	I1213 10:31:45.414551  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:45.414971  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:45.914734  353396 type.go:168] "Request Body" body=""
	I1213 10:31:45.914804  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:45.915154  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:45.915214  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:46.414939  353396 type.go:168] "Request Body" body=""
	I1213 10:31:46.415012  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:46.415313  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:46.915102  353396 type.go:168] "Request Body" body=""
	I1213 10:31:46.915186  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:46.915495  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:47.032831  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:47.089360  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:47.092850  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:47.092882  353396 retry.go:31] will retry after 14.257705184s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
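[editor note] The uneven retry delays that follow in this log (14.25s, then 23.02s, then 32.77s for storageclass.yaml) suggest backoff with random jitter. A hypothetical sketch of that pattern only; the base delay, attempt count, and function name are assumptions, not minikube's actual retry.go parameters:

```go
// Sketch: exponential backoff with random jitter, which yields uneven
// waits like the "will retry after 14.257705184s" lines above.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func retryWithJitter(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		sleep := base << i                                // base * 2^i
		sleep += time.Duration(rand.Int63n(int64(sleep))) // add jitter
		fmt.Printf("apply failed, will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return err
}
```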
	I1213 10:31:47.414212  353396 type.go:168] "Request Body" body=""
	I1213 10:31:47.414287  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:47.414544  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:47.914676  353396 type.go:168] "Request Body" body=""
	I1213 10:31:47.914771  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:47.915126  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:48.414973  353396 type.go:168] "Request Body" body=""
	I1213 10:31:48.415048  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:48.415397  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:48.415453  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:48.914935  353396 type.go:168] "Request Body" body=""
	I1213 10:31:48.915016  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:48.915282  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:49.415024  353396 type.go:168] "Request Body" body=""
	I1213 10:31:49.415102  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:49.415400  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:49.552673  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:49.614333  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:49.614392  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:49.614413  353396 retry.go:31] will retry after 23.024485713s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:49.914879  353396 type.go:168] "Request Body" body=""
	I1213 10:31:49.914950  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:49.915276  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:50.415038  353396 type.go:168] "Request Body" body=""
	I1213 10:31:50.415112  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:50.415429  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:50.415489  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:50.914923  353396 type.go:168] "Request Body" body=""
	I1213 10:31:50.915005  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:50.915323  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:51.414987  353396 type.go:168] "Request Body" body=""
	I1213 10:31:51.415064  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:51.415444  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:51.915111  353396 type.go:168] "Request Body" body=""
	I1213 10:31:51.915192  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:51.915480  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:52.414202  353396 type.go:168] "Request Body" body=""
	I1213 10:31:52.414285  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:52.414620  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:52.914489  353396 type.go:168] "Request Body" body=""
	I1213 10:31:52.914562  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:52.914926  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:52.914988  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:53.414747  353396 type.go:168] "Request Body" body=""
	I1213 10:31:53.414820  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:53.415090  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:53.914866  353396 type.go:168] "Request Body" body=""
	I1213 10:31:53.914939  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:53.915273  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:54.415083  353396 type.go:168] "Request Body" body=""
	I1213 10:31:54.415160  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:54.415481  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:54.914141  353396 type.go:168] "Request Body" body=""
	I1213 10:31:54.914222  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:54.914536  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:55.414218  353396 type.go:168] "Request Body" body=""
	I1213 10:31:55.414293  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:55.414637  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:55.414730  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:55.914444  353396 type.go:168] "Request Body" body=""
	I1213 10:31:55.914529  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:55.914897  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:56.414701  353396 type.go:168] "Request Body" body=""
	I1213 10:31:56.414795  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:56.415073  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:56.914860  353396 type.go:168] "Request Body" body=""
	I1213 10:31:56.914937  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:56.915228  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:57.415017  353396 type.go:168] "Request Body" body=""
	I1213 10:31:57.415092  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:57.415406  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:57.415455  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:57.914498  353396 type.go:168] "Request Body" body=""
	I1213 10:31:57.914564  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:57.914847  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:58.414239  353396 type.go:168] "Request Body" body=""
	I1213 10:31:58.414333  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:58.414679  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:58.914256  353396 type.go:168] "Request Body" body=""
	I1213 10:31:58.914332  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:58.914624  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:59.414297  353396 type.go:168] "Request Body" body=""
	I1213 10:31:59.414370  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:59.414637  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:59.914203  353396 type.go:168] "Request Body" body=""
	I1213 10:31:59.914281  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:59.914613  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:59.914668  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:00.414938  353396 type.go:168] "Request Body" body=""
	I1213 10:32:00.415045  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:00.415391  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:00.914138  353396 type.go:168] "Request Body" body=""
	I1213 10:32:00.914218  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:00.914514  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:01.350855  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:32:01.414382  353396 type.go:168] "Request Body" body=""
	I1213 10:32:01.414452  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:01.414751  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:01.421471  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:32:01.421509  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:32:01.421528  353396 retry.go:31] will retry after 32.770422349s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:32:01.914172  353396 type.go:168] "Request Body" body=""
	I1213 10:32:01.914251  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:01.914603  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:02.414231  353396 type.go:168] "Request Body" body=""
	I1213 10:32:02.414337  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:02.414661  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:02.414753  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:02.914852  353396 type.go:168] "Request Body" body=""
	I1213 10:32:02.914942  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:02.915291  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:03.415068  353396 type.go:168] "Request Body" body=""
	I1213 10:32:03.415140  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:03.415560  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:03.914265  353396 type.go:168] "Request Body" body=""
	I1213 10:32:03.914365  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:03.914734  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:04.414487  353396 type.go:168] "Request Body" body=""
	I1213 10:32:04.414564  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:04.414920  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:04.414976  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:04.914751  353396 type.go:168] "Request Body" body=""
	I1213 10:32:04.914822  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:04.915267  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:05.415063  353396 type.go:168] "Request Body" body=""
	I1213 10:32:05.415138  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:05.415446  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:05.914158  353396 type.go:168] "Request Body" body=""
	I1213 10:32:05.914237  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:05.914537  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:06.414151  353396 type.go:168] "Request Body" body=""
	I1213 10:32:06.414241  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:06.414588  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:06.914189  353396 type.go:168] "Request Body" body=""
	I1213 10:32:06.914264  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:06.914626  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:06.914721  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:07.414237  353396 type.go:168] "Request Body" body=""
	I1213 10:32:07.414336  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:07.414675  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:07.914726  353396 type.go:168] "Request Body" body=""
	I1213 10:32:07.914801  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:07.915094  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:08.414945  353396 type.go:168] "Request Body" body=""
	I1213 10:32:08.415038  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:08.415395  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:08.914139  353396 type.go:168] "Request Body" body=""
	I1213 10:32:08.914221  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:08.914527  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:09.414118  353396 type.go:168] "Request Body" body=""
	I1213 10:32:09.414186  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:09.414531  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:09.414607  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:09.914205  353396 type.go:168] "Request Body" body=""
	I1213 10:32:09.914276  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:09.914632  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:10.414215  353396 type.go:168] "Request Body" body=""
	I1213 10:32:10.414292  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:10.414629  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:10.914203  353396 type.go:168] "Request Body" body=""
	I1213 10:32:10.914292  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:10.914593  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:11.414242  353396 type.go:168] "Request Body" body=""
	I1213 10:32:11.414348  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:11.414703  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:11.414757  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:11.914433  353396 type.go:168] "Request Body" body=""
	I1213 10:32:11.914511  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:11.914889  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:12.414571  353396 type.go:168] "Request Body" body=""
	I1213 10:32:12.414678  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:12.414978  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:12.639532  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:32:12.701723  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:32:12.701768  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:32:12.701788  353396 retry.go:31] will retry after 24.373252759s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
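[editor note] The kubectl errors above propose "--validate=false" when the apiserver's /openapi/v2 endpoint cannot be reached. A sketch of re-running the same apply without client-side validation, built from the exact command shown in the log; note this only skips schema validation and cannot succeed while kube-apiserver itself refuses connections:

```go
// Sketch: run the log's kubectl apply with --validate=false added.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"apply", "--force", "--validate=false",
		"-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
```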
	I1213 10:32:12.915117  353396 type.go:168] "Request Body" body=""
	I1213 10:32:12.915211  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:12.915511  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:13.414252  353396 type.go:168] "Request Body" body=""
	I1213 10:32:13.414325  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:13.414721  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:13.414794  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:13.914428  353396 type.go:168] "Request Body" body=""
	I1213 10:32:13.914518  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:13.914913  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:14.414265  353396 type.go:168] "Request Body" body=""
	I1213 10:32:14.414377  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:14.414786  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:14.914281  353396 type.go:168] "Request Body" body=""
	I1213 10:32:14.914360  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:14.914710  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:15.414252  353396 type.go:168] "Request Body" body=""
	I1213 10:32:15.414344  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:15.414630  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:15.914243  353396 type.go:168] "Request Body" body=""
	I1213 10:32:15.914331  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:15.914660  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:15.914750  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:16.414450  353396 type.go:168] "Request Body" body=""
	I1213 10:32:16.414531  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:16.414846  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:16.914165  353396 type.go:168] "Request Body" body=""
	I1213 10:32:16.914233  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:16.914541  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:17.414213  353396 type.go:168] "Request Body" body=""
	I1213 10:32:17.414341  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:17.414625  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:17.914712  353396 type.go:168] "Request Body" body=""
	I1213 10:32:17.914803  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:17.915126  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:17.915184  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:18.414920  353396 type.go:168] "Request Body" body=""
	I1213 10:32:18.415009  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:18.415286  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:18.915162  353396 type.go:168] "Request Body" body=""
	I1213 10:32:18.915251  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:18.915598  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:19.414275  353396 type.go:168] "Request Body" body=""
	I1213 10:32:19.414357  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:19.414661  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:19.914425  353396 type.go:168] "Request Body" body=""
	I1213 10:32:19.914592  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:19.914937  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:20.414768  353396 type.go:168] "Request Body" body=""
	I1213 10:32:20.414852  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:20.415220  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:20.415278  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:20.915055  353396 type.go:168] "Request Body" body=""
	I1213 10:32:20.915156  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:20.915495  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:21.414184  353396 type.go:168] "Request Body" body=""
	I1213 10:32:21.414260  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:21.414555  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:21.914250  353396 type.go:168] "Request Body" body=""
	I1213 10:32:21.914326  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:21.914677  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:22.414287  353396 type.go:168] "Request Body" body=""
	I1213 10:32:22.414370  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:22.414741  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:22.914735  353396 type.go:168] "Request Body" body=""
	I1213 10:32:22.914804  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:22.915060  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:22.915107  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:23.414877  353396 type.go:168] "Request Body" body=""
	I1213 10:32:23.414953  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:23.415252  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:23.915036  353396 type.go:168] "Request Body" body=""
	I1213 10:32:23.915115  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:23.915451  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:24.415135  353396 type.go:168] "Request Body" body=""
	I1213 10:32:24.415211  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:24.415473  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:24.914198  353396 type.go:168] "Request Body" body=""
	I1213 10:32:24.914282  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:24.914640  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:25.414436  353396 type.go:168] "Request Body" body=""
	I1213 10:32:25.414514  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:25.414854  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:25.414914  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:25.914152  353396 type.go:168] "Request Body" body=""
	I1213 10:32:25.914219  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:25.914483  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:26.414214  353396 type.go:168] "Request Body" body=""
	I1213 10:32:26.414314  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:26.414636  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:26.914231  353396 type.go:168] "Request Body" body=""
	I1213 10:32:26.914307  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:26.914637  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:27.414336  353396 type.go:168] "Request Body" body=""
	I1213 10:32:27.414402  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:27.414666  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:27.914790  353396 type.go:168] "Request Body" body=""
	I1213 10:32:27.914883  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:27.915207  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:27.915256  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:28.414990  353396 type.go:168] "Request Body" body=""
	I1213 10:32:28.415074  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:28.415436  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:28.915099  353396 type.go:168] "Request Body" body=""
	I1213 10:32:28.915173  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:28.915437  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:29.414163  353396 type.go:168] "Request Body" body=""
	I1213 10:32:29.414250  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:29.414561  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:29.914302  353396 type.go:168] "Request Body" body=""
	I1213 10:32:29.914399  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:29.914733  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:30.414157  353396 type.go:168] "Request Body" body=""
	I1213 10:32:30.414241  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:30.414552  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:30.414604  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:30.914233  353396 type.go:168] "Request Body" body=""
	I1213 10:32:30.914307  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:30.914656  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:31.414263  353396 type.go:168] "Request Body" body=""
	I1213 10:32:31.414357  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:31.414708  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:31.914203  353396 type.go:168] "Request Body" body=""
	I1213 10:32:31.914273  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:31.914531  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:32.414222  353396 type.go:168] "Request Body" body=""
	I1213 10:32:32.414304  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:32.414640  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:32.414727  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:32.914510  353396 type.go:168] "Request Body" body=""
	I1213 10:32:32.914599  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:32.914973  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:33.414825  353396 type.go:168] "Request Body" body=""
	I1213 10:32:33.414915  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:33.415280  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:33.915101  353396 type.go:168] "Request Body" body=""
	I1213 10:32:33.915178  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:33.915518  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:34.192937  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:32:34.265284  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:32:34.265320  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:32:34.265405  353396 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
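
The kubectl error above suggests "--validate=false" as a workaround, but minikube instead retries the apply, as the "apply failed, will retry" line shows. A minimal Go sketch of that retry pattern, assuming a fixed 5-second backoff and three attempts (the actual retry policy in addons.go is not visible in this log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// applyWithRetry runs `kubectl apply --force -f manifest` and retries
// on failure, mirroring the "apply failed, will retry" behaviour logged
// above. Backoff and attempt count are illustrative assumptions.
func applyWithRetry(kubectl, kubeconfig, manifest string, attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command(kubectl, "apply", "--force", "-f", manifest)
		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply failed, will retry: %v\n%s", err, out)
		time.Sleep(5 * time.Second) // assumed backoff
	}
	return lastErr
}

func main() {
	// Paths taken verbatim from the log lines above.
	if err := applyWithRetry(
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storageclass.yaml",
		3,
	); err != nil {
		fmt.Println(err)
	}
}
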
	I1213 10:32:34.414970  353396 type.go:168] "Request Body" body=""
	I1213 10:32:34.415052  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:34.415423  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:34.415491  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:34.914214  353396 type.go:168] "Request Body" body=""
	I1213 10:32:34.914301  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:34.914655  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:35.414268  353396 type.go:168] "Request Body" body=""
	I1213 10:32:35.414356  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:35.414678  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:35.914239  353396 type.go:168] "Request Body" body=""
	I1213 10:32:35.914322  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:35.914704  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:36.414407  353396 type.go:168] "Request Body" body=""
	I1213 10:32:36.414485  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:36.414823  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:36.914200  353396 type.go:168] "Request Body" body=""
	I1213 10:32:36.914292  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:36.914625  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:36.914719  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:37.076016  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:32:37.141132  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:32:37.141183  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:32:37.141286  353396 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 10:32:37.146231  353396 out.go:179] * Enabled addons: 
	I1213 10:32:37.149102  353396 addons.go:530] duration metric: took 1m29.709445532s for enable addons: enabled=[]
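
Everything that follows is the node-readiness poll: the timestamps advance in roughly 500 ms steps, and each cycle issues the same GET against https://192.168.49.2:8441/api/v1/nodes/functional-652709, logging an empty response while the connection is refused. A minimal, self-contained sketch of such a probe loop, assuming a 500 ms interval, a 2-second client timeout, and skipped TLS verification (none of these settings are taken from minikube's node_ready.go):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// The test apiserver presents a self-signed cert; skip
			// verification for this illustrative probe only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	// URL taken from the log lines in this section.
	url := "https://192.168.49.2:8441/api/v1/nodes/functional-652709"
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for range ticker.C {
		resp, err := client.Get(url)
		if err != nil {
			// Matches the recurring warning below.
			fmt.Printf("error getting node (will retry): %v\n", err)
			continue
		}
		resp.Body.Close()
		fmt.Printf("node endpoint answered: %s\n", resp.Status)
		return
	}
}

Probing the node object itself, rather than a generic health endpoint, matches what the log shows: the readiness check needs the node's "Ready" condition, so an answer on any other endpoint would not be enough to stop the loop.
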
	I1213 10:32:37.414592  353396 type.go:168] "Request Body" body=""
	I1213 10:32:37.414736  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:37.415128  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:37.914163  353396 type.go:168] "Request Body" body=""
	I1213 10:32:37.914246  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:37.914580  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:38.414336  353396 type.go:168] "Request Body" body=""
	I1213 10:32:38.414415  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:38.414780  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:38.914239  353396 type.go:168] "Request Body" body=""
	I1213 10:32:38.914317  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:38.914675  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:38.914752  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:39.414390  353396 type.go:168] "Request Body" body=""
	I1213 10:32:39.414462  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:39.414811  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:39.914220  353396 type.go:168] "Request Body" body=""
	I1213 10:32:39.914296  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:39.914620  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:40.414231  353396 type.go:168] "Request Body" body=""
	I1213 10:32:40.414307  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:40.414622  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:40.914193  353396 type.go:168] "Request Body" body=""
	I1213 10:32:40.914271  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:40.914548  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:41.414251  353396 type.go:168] "Request Body" body=""
	I1213 10:32:41.414348  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:41.414708  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:41.414763  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:41.914229  353396 type.go:168] "Request Body" body=""
	I1213 10:32:41.914327  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:41.914643  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:42.414161  353396 type.go:168] "Request Body" body=""
	I1213 10:32:42.414248  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:42.414516  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:42.914567  353396 type.go:168] "Request Body" body=""
	I1213 10:32:42.914643  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:42.914974  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:43.414788  353396 type.go:168] "Request Body" body=""
	I1213 10:32:43.414863  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:43.415192  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:43.415248  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:43.915667  353396 type.go:168] "Request Body" body=""
	I1213 10:32:43.915743  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:43.916016  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:44.414833  353396 type.go:168] "Request Body" body=""
	I1213 10:32:44.414913  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:44.415264  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:44.915103  353396 type.go:168] "Request Body" body=""
	I1213 10:32:44.915182  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:44.915522  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:45.414185  353396 type.go:168] "Request Body" body=""
	I1213 10:32:45.414262  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:45.414578  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:45.914231  353396 type.go:168] "Request Body" body=""
	I1213 10:32:45.914307  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:45.914655  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:45.914730  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:46.414248  353396 type.go:168] "Request Body" body=""
	I1213 10:32:46.414348  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:46.414706  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:46.914404  353396 type.go:168] "Request Body" body=""
	I1213 10:32:46.914482  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:46.914848  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:47.414245  353396 type.go:168] "Request Body" body=""
	I1213 10:32:47.414332  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:47.414670  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:47.915115  353396 type.go:168] "Request Body" body=""
	I1213 10:32:47.915188  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:47.915496  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:47.915548  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:48.414163  353396 type.go:168] "Request Body" body=""
	I1213 10:32:48.414231  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:48.414501  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:48.914202  353396 type.go:168] "Request Body" body=""
	I1213 10:32:48.914276  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:48.914656  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:49.414387  353396 type.go:168] "Request Body" body=""
	I1213 10:32:49.414468  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:49.414814  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:49.914540  353396 type.go:168] "Request Body" body=""
	I1213 10:32:49.914615  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:49.914986  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:50.414789  353396 type.go:168] "Request Body" body=""
	I1213 10:32:50.414867  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:50.415215  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:50.415272  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:50.915036  353396 type.go:168] "Request Body" body=""
	I1213 10:32:50.915111  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:50.915455  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:51.414115  353396 type.go:168] "Request Body" body=""
	I1213 10:32:51.414190  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:51.414454  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:51.914146  353396 type.go:168] "Request Body" body=""
	I1213 10:32:51.914227  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:51.914572  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:52.414290  353396 type.go:168] "Request Body" body=""
	I1213 10:32:52.414382  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:52.414734  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:52.914517  353396 type.go:168] "Request Body" body=""
	I1213 10:32:52.914591  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:52.914875  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:52.914926  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:53.414207  353396 type.go:168] "Request Body" body=""
	I1213 10:32:53.414292  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:53.414618  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:53.914425  353396 type.go:168] "Request Body" body=""
	I1213 10:32:53.914515  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:53.914900  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:54.414164  353396 type.go:168] "Request Body" body=""
	I1213 10:32:54.414246  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:54.414585  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:54.915092  353396 type.go:168] "Request Body" body=""
	I1213 10:32:54.915167  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:54.915487  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:54.915545  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:55.414202  353396 type.go:168] "Request Body" body=""
	I1213 10:32:55.414280  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:55.414623  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:55.914337  353396 type.go:168] "Request Body" body=""
	I1213 10:32:55.914403  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:55.914665  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:56.415120  353396 type.go:168] "Request Body" body=""
	I1213 10:32:56.415206  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:56.415536  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:56.914238  353396 type.go:168] "Request Body" body=""
	I1213 10:32:56.914316  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:56.914647  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:57.414233  353396 type.go:168] "Request Body" body=""
	I1213 10:32:57.414314  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:57.414566  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:57.414610  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:57.914675  353396 type.go:168] "Request Body" body=""
	I1213 10:32:57.914760  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:57.915078  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:58.414843  353396 type.go:168] "Request Body" body=""
	I1213 10:32:58.414921  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:58.415260  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:58.914928  353396 type.go:168] "Request Body" body=""
	I1213 10:32:58.914994  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:58.915260  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:59.414997  353396 type.go:168] "Request Body" body=""
	I1213 10:32:59.415070  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:59.415409  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:59.415463  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:59.915087  353396 type.go:168] "Request Body" body=""
	I1213 10:32:59.915169  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:59.915509  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:00.414239  353396 type.go:168] "Request Body" body=""
	I1213 10:33:00.414313  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:00.414605  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:00.914240  353396 type.go:168] "Request Body" body=""
	I1213 10:33:00.914313  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:00.914656  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:01.414407  353396 type.go:168] "Request Body" body=""
	I1213 10:33:01.414488  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:01.414812  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:01.914174  353396 type.go:168] "Request Body" body=""
	I1213 10:33:01.914242  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:01.914578  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:01.914642  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:02.414226  353396 type.go:168] "Request Body" body=""
	I1213 10:33:02.414314  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:02.414834  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:02.914840  353396 type.go:168] "Request Body" body=""
	I1213 10:33:02.914918  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:02.915280  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:03.415005  353396 type.go:168] "Request Body" body=""
	I1213 10:33:03.415071  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:03.415330  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:03.915080  353396 type.go:168] "Request Body" body=""
	I1213 10:33:03.915153  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:03.915513  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:03.915572  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:04.414115  353396 type.go:168] "Request Body" body=""
	I1213 10:33:04.414198  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:04.414530  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:04.914186  353396 type.go:168] "Request Body" body=""
	I1213 10:33:04.914260  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:04.914545  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:05.414244  353396 type.go:168] "Request Body" body=""
	I1213 10:33:05.414331  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:05.414650  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:05.914534  353396 type.go:168] "Request Body" body=""
	I1213 10:33:05.914636  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:05.915200  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:06.414237  353396 type.go:168] "Request Body" body=""
	I1213 10:33:06.414329  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:06.414755  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:06.414814  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:06.914236  353396 type.go:168] "Request Body" body=""
	I1213 10:33:06.914317  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:06.914747  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:07.414280  353396 type.go:168] "Request Body" body=""
	I1213 10:33:07.414359  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:07.414723  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:07.914678  353396 type.go:168] "Request Body" body=""
	I1213 10:33:07.914764  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:07.915020  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:08.414786  353396 type.go:168] "Request Body" body=""
	I1213 10:33:08.414861  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:08.415237  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:08.415311  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:08.914933  353396 type.go:168] "Request Body" body=""
	I1213 10:33:08.915016  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:08.915363  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:09.415090  353396 type.go:168] "Request Body" body=""
	I1213 10:33:09.415163  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:09.415497  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:09.914209  353396 type.go:168] "Request Body" body=""
	I1213 10:33:09.914284  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:09.914628  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:10.414238  353396 type.go:168] "Request Body" body=""
	I1213 10:33:10.414322  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:10.414661  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:10.914394  353396 type.go:168] "Request Body" body=""
	I1213 10:33:10.914479  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:10.914797  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:10.914865  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:11.414499  353396 type.go:168] "Request Body" body=""
	I1213 10:33:11.414573  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:11.414931  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:11.914532  353396 type.go:168] "Request Body" body=""
	I1213 10:33:11.914611  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:11.914966  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:12.414801  353396 type.go:168] "Request Body" body=""
	I1213 10:33:12.414868  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:12.415171  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:12.915004  353396 type.go:168] "Request Body" body=""
	I1213 10:33:12.915081  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:12.915417  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:12.915470  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:13.414163  353396 type.go:168] "Request Body" body=""
	I1213 10:33:13.414242  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:13.414579  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:13.914271  353396 type.go:168] "Request Body" body=""
	I1213 10:33:13.914343  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:13.914614  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:14.414244  353396 type.go:168] "Request Body" body=""
	I1213 10:33:14.414327  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:14.414733  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:14.914296  353396 type.go:168] "Request Body" body=""
	I1213 10:33:14.914374  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:14.914755  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:15.414445  353396 type.go:168] "Request Body" body=""
	I1213 10:33:15.414516  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:15.414826  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:15.414874  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:15.914238  353396 type.go:168] "Request Body" body=""
	I1213 10:33:15.914315  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:15.914667  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:16.414229  353396 type.go:168] "Request Body" body=""
	I1213 10:33:16.414308  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:16.414633  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:16.914167  353396 type.go:168] "Request Body" body=""
	I1213 10:33:16.914244  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:16.914576  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:17.414281  353396 type.go:168] "Request Body" body=""
	I1213 10:33:17.414356  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:17.414750  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:17.914808  353396 type.go:168] "Request Body" body=""
	I1213 10:33:17.914886  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:17.915216  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:17.915272  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:18.414973  353396 type.go:168] "Request Body" body=""
	I1213 10:33:18.415047  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:18.415307  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:18.915151  353396 type.go:168] "Request Body" body=""
	I1213 10:33:18.915226  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:18.915625  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:19.414335  353396 type.go:168] "Request Body" body=""
	I1213 10:33:19.414419  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:19.414759  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:19.914166  353396 type.go:168] "Request Body" body=""
	I1213 10:33:19.914245  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:19.914568  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:20.414186  353396 type.go:168] "Request Body" body=""
	I1213 10:33:20.414272  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:20.414597  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:20.414654  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:20.914206  353396 type.go:168] "Request Body" body=""
	I1213 10:33:20.914282  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:20.914621  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:21.414132  353396 type.go:168] "Request Body" body=""
	I1213 10:33:21.414208  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:21.414499  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:21.914197  353396 type.go:168] "Request Body" body=""
	I1213 10:33:21.914272  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:21.914622  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:22.414213  353396 type.go:168] "Request Body" body=""
	I1213 10:33:22.414288  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:22.414631  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:22.414714  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:22.914531  353396 type.go:168] "Request Body" body=""
	I1213 10:33:22.914600  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:22.914881  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:23.414582  353396 type.go:168] "Request Body" body=""
	I1213 10:33:23.414669  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:23.415069  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:23.914895  353396 type.go:168] "Request Body" body=""
	I1213 10:33:23.914973  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:23.915336  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:24.415103  353396 type.go:168] "Request Body" body=""
	I1213 10:33:24.415180  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:24.415512  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:24.415578  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	[log condensed: the GET https://192.168.49.2:8441/api/v1/nodes/functional-652709 poll repeated unchanged every ~500ms from 10:33:24.914 through 10:34:25.914, each attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; node_ready.go:55 re-logged the will-retry warning every ~2.5s throughout]
	I1213 10:34:26.414193  353396 type.go:168] "Request Body" body=""
	I1213 10:34:26.414269  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:26.414575  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:26.914218  353396 type.go:168] "Request Body" body=""
	I1213 10:34:26.914293  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:26.914611  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:27.414157  353396 type.go:168] "Request Body" body=""
	I1213 10:34:27.414224  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:27.414475  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:27.914651  353396 type.go:168] "Request Body" body=""
	I1213 10:34:27.914747  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:27.915082  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:27.915143  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:28.414747  353396 type.go:168] "Request Body" body=""
	I1213 10:34:28.414831  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:28.415166  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:28.914918  353396 type.go:168] "Request Body" body=""
	I1213 10:34:28.914994  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:28.915317  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:29.415099  353396 type.go:168] "Request Body" body=""
	I1213 10:34:29.415182  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:29.415527  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:29.914143  353396 type.go:168] "Request Body" body=""
	I1213 10:34:29.914235  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:29.914632  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:30.414347  353396 type.go:168] "Request Body" body=""
	I1213 10:34:30.414415  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:30.414708  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:30.414755  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:30.914237  353396 type.go:168] "Request Body" body=""
	I1213 10:34:30.914320  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:30.914657  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:31.414414  353396 type.go:168] "Request Body" body=""
	I1213 10:34:31.414503  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:31.414889  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:31.914157  353396 type.go:168] "Request Body" body=""
	I1213 10:34:31.914230  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:31.914496  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:32.414209  353396 type.go:168] "Request Body" body=""
	I1213 10:34:32.414292  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:32.414648  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:32.914128  353396 type.go:168] "Request Body" body=""
	I1213 10:34:32.914211  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:32.914560  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:32.914616  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:33.414256  353396 type.go:168] "Request Body" body=""
	I1213 10:34:33.414326  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:33.414617  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:33.914297  353396 type.go:168] "Request Body" body=""
	I1213 10:34:33.914377  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:33.914762  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:34.414238  353396 type.go:168] "Request Body" body=""
	I1213 10:34:34.414315  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:34.414643  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:34.914151  353396 type.go:168] "Request Body" body=""
	I1213 10:34:34.914224  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:34.914486  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:35.414223  353396 type.go:168] "Request Body" body=""
	I1213 10:34:35.414304  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:35.414642  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:35.414735  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:35.914235  353396 type.go:168] "Request Body" body=""
	I1213 10:34:35.914320  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:35.914658  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:36.414261  353396 type.go:168] "Request Body" body=""
	I1213 10:34:36.414332  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:36.414605  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:36.914211  353396 type.go:168] "Request Body" body=""
	I1213 10:34:36.914285  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:36.914640  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:37.414211  353396 type.go:168] "Request Body" body=""
	I1213 10:34:37.414289  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:37.414584  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:37.914675  353396 type.go:168] "Request Body" body=""
	I1213 10:34:37.914757  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:37.915023  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:37.915064  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:38.414903  353396 type.go:168] "Request Body" body=""
	I1213 10:34:38.414986  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:38.415396  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:38.914137  353396 type.go:168] "Request Body" body=""
	I1213 10:34:38.914223  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:38.914580  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:39.414172  353396 type.go:168] "Request Body" body=""
	I1213 10:34:39.414253  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:39.414582  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:39.914286  353396 type.go:168] "Request Body" body=""
	I1213 10:34:39.914363  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:39.914715  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:40.414232  353396 type.go:168] "Request Body" body=""
	I1213 10:34:40.414314  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:40.414677  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:40.414753  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:40.914094  353396 type.go:168] "Request Body" body=""
	I1213 10:34:40.914175  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:40.914491  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:41.414243  353396 type.go:168] "Request Body" body=""
	I1213 10:34:41.414321  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:41.414666  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:41.914412  353396 type.go:168] "Request Body" body=""
	I1213 10:34:41.914495  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:41.914870  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:42.414297  353396 type.go:168] "Request Body" body=""
	I1213 10:34:42.414371  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:42.414633  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:42.914585  353396 type.go:168] "Request Body" body=""
	I1213 10:34:42.914668  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:42.915024  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:42.915079  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:43.414607  353396 type.go:168] "Request Body" body=""
	I1213 10:34:43.414702  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:43.415071  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:43.914792  353396 type.go:168] "Request Body" body=""
	I1213 10:34:43.914869  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:43.915208  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:44.415017  353396 type.go:168] "Request Body" body=""
	I1213 10:34:44.415093  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:44.415470  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:44.915253  353396 type.go:168] "Request Body" body=""
	I1213 10:34:44.915329  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:44.915668  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:44.915722  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:45.414372  353396 type.go:168] "Request Body" body=""
	I1213 10:34:45.414449  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:45.414746  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:45.914236  353396 type.go:168] "Request Body" body=""
	I1213 10:34:45.914316  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:45.914655  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:46.414239  353396 type.go:168] "Request Body" body=""
	I1213 10:34:46.414322  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:46.414658  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:46.915158  353396 type.go:168] "Request Body" body=""
	I1213 10:34:46.915231  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:46.915495  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:47.414163  353396 type.go:168] "Request Body" body=""
	I1213 10:34:47.414242  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:47.414552  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:47.414603  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:47.914533  353396 type.go:168] "Request Body" body=""
	I1213 10:34:47.914615  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:47.914992  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:48.414726  353396 type.go:168] "Request Body" body=""
	I1213 10:34:48.414795  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:48.415059  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:48.914847  353396 type.go:168] "Request Body" body=""
	I1213 10:34:48.914935  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:48.915268  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:49.415068  353396 type.go:168] "Request Body" body=""
	I1213 10:34:49.415159  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:49.415526  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:49.415582  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:49.914165  353396 type.go:168] "Request Body" body=""
	I1213 10:34:49.914239  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:49.914499  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:50.414183  353396 type.go:168] "Request Body" body=""
	I1213 10:34:50.414258  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:50.414554  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:50.914233  353396 type.go:168] "Request Body" body=""
	I1213 10:34:50.914307  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:50.914623  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:51.414141  353396 type.go:168] "Request Body" body=""
	I1213 10:34:51.414231  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:51.414525  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:51.914233  353396 type.go:168] "Request Body" body=""
	I1213 10:34:51.914311  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:51.914675  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:51.914750  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:52.414266  353396 type.go:168] "Request Body" body=""
	I1213 10:34:52.414347  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:52.414711  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:52.914454  353396 type.go:168] "Request Body" body=""
	I1213 10:34:52.914525  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:52.914819  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:53.414527  353396 type.go:168] "Request Body" body=""
	I1213 10:34:53.414603  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:53.414939  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:53.914755  353396 type.go:168] "Request Body" body=""
	I1213 10:34:53.914832  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:53.915171  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:53.915227  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:54.414953  353396 type.go:168] "Request Body" body=""
	I1213 10:34:54.415021  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:54.415337  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:54.915118  353396 type.go:168] "Request Body" body=""
	I1213 10:34:54.915194  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:54.915521  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:55.414227  353396 type.go:168] "Request Body" body=""
	I1213 10:34:55.414310  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:55.414625  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:55.914335  353396 type.go:168] "Request Body" body=""
	I1213 10:34:55.914406  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:55.914763  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:56.414221  353396 type.go:168] "Request Body" body=""
	I1213 10:34:56.414295  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:56.414641  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:56.414726  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:56.914215  353396 type.go:168] "Request Body" body=""
	I1213 10:34:56.914290  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:56.914634  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:57.415117  353396 type.go:168] "Request Body" body=""
	I1213 10:34:57.415188  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:57.415448  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:57.914627  353396 type.go:168] "Request Body" body=""
	I1213 10:34:57.914722  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:57.915055  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:58.414842  353396 type.go:168] "Request Body" body=""
	I1213 10:34:58.414915  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:58.415239  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:58.415298  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:58.915010  353396 type.go:168] "Request Body" body=""
	I1213 10:34:58.915077  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:58.915339  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:59.414106  353396 type.go:168] "Request Body" body=""
	I1213 10:34:59.414182  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:59.414535  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:59.914218  353396 type.go:168] "Request Body" body=""
	I1213 10:34:59.914297  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:59.914630  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:00.414234  353396 type.go:168] "Request Body" body=""
	I1213 10:35:00.414329  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:00.414642  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:00.914218  353396 type.go:168] "Request Body" body=""
	I1213 10:35:00.914294  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:00.914620  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:00.914708  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:01.414284  353396 type.go:168] "Request Body" body=""
	I1213 10:35:01.414392  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:01.414774  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:01.914191  353396 type.go:168] "Request Body" body=""
	I1213 10:35:01.914265  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:01.914561  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:02.414257  353396 type.go:168] "Request Body" body=""
	I1213 10:35:02.414340  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:02.414636  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:02.914575  353396 type.go:168] "Request Body" body=""
	I1213 10:35:02.914650  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:02.914985  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:02.915031  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:03.414733  353396 type.go:168] "Request Body" body=""
	I1213 10:35:03.414804  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:03.415061  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:03.914909  353396 type.go:168] "Request Body" body=""
	I1213 10:35:03.914993  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:03.915318  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:04.415148  353396 type.go:168] "Request Body" body=""
	I1213 10:35:04.415227  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:04.415569  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:04.914256  353396 type.go:168] "Request Body" body=""
	I1213 10:35:04.914332  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:04.914597  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:05.414227  353396 type.go:168] "Request Body" body=""
	I1213 10:35:05.414308  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:05.414640  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:05.414721  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:05.914245  353396 type.go:168] "Request Body" body=""
	I1213 10:35:05.914320  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:05.914676  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:06.414484  353396 type.go:168] "Request Body" body=""
	I1213 10:35:06.414568  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:06.415045  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:06.914814  353396 type.go:168] "Request Body" body=""
	I1213 10:35:06.914901  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:06.915246  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:07.415065  353396 type.go:168] "Request Body" body=""
	I1213 10:35:07.415153  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:07.415494  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:07.415553  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:07.914641  353396 type.go:168] "Request Body" body=""
	I1213 10:35:07.914776  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:07.915128  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:08.414792  353396 type.go:168] "Request Body" body=""
	I1213 10:35:08.414868  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:08.415229  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:08.914906  353396 type.go:168] "Request Body" body=""
	I1213 10:35:08.914987  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:08.915375  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:09.415114  353396 type.go:168] "Request Body" body=""
	I1213 10:35:09.415185  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:09.415534  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:09.415626  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:09.914398  353396 type.go:168] "Request Body" body=""
	I1213 10:35:09.914476  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:09.914888  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:10.414634  353396 type.go:168] "Request Body" body=""
	I1213 10:35:10.414730  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:10.415080  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:10.914849  353396 type.go:168] "Request Body" body=""
	I1213 10:35:10.914926  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:10.915192  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:11.414986  353396 type.go:168] "Request Body" body=""
	I1213 10:35:11.415062  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:11.415419  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:11.915136  353396 type.go:168] "Request Body" body=""
	I1213 10:35:11.915218  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:11.915577  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:11.915629  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:12.414163  353396 type.go:168] "Request Body" body=""
	I1213 10:35:12.414245  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:12.414563  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:12.914542  353396 type.go:168] "Request Body" body=""
	I1213 10:35:12.914628  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:12.914969  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:13.414794  353396 type.go:168] "Request Body" body=""
	I1213 10:35:13.414874  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:13.415199  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:13.914951  353396 type.go:168] "Request Body" body=""
	I1213 10:35:13.915028  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:13.915309  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:14.415142  353396 type.go:168] "Request Body" body=""
	I1213 10:35:14.415220  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:14.415591  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:14.415644  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:14.914209  353396 type.go:168] "Request Body" body=""
	I1213 10:35:14.914291  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:14.914640  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:15.414142  353396 type.go:168] "Request Body" body=""
	I1213 10:35:15.414223  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:15.414500  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:15.914207  353396 type.go:168] "Request Body" body=""
	I1213 10:35:15.914282  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:15.914614  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:16.414239  353396 type.go:168] "Request Body" body=""
	I1213 10:35:16.414321  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:16.414682  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:16.914393  353396 type.go:168] "Request Body" body=""
	I1213 10:35:16.914470  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:16.914765  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:16.914810  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	... identical GET https://192.168.49.2:8441/api/v1/nodes/functional-652709 request/response cycles repeat every ~500ms through 10:36:16, each returning status="" milliseconds=0 over a refused connection; the retry warnings logged in that window were ...
	W1213 10:35:18.915589  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:21.414775  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:23.415331  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:25.915467  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:28.415239  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:30.914682  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:32.914844  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:35.414752  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:37.414887  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:39.914716  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:41.914886  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:43.915335  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:46.414761  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:48.415416  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:50.914705  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:52.914936  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:55.414814  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:35:57.915453  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:36:00.421665  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:36:02.915005  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:36:05.414749  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:36:07.414787  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:36:09.914752  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:36:11.914948  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:36:14.414721  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:36:16.914725  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:17.414244  353396 type.go:168] "Request Body" body=""
	I1213 10:36:17.414333  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:17.414644  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:17.914739  353396 type.go:168] "Request Body" body=""
	I1213 10:36:17.914821  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:17.915139  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:18.414875  353396 type.go:168] "Request Body" body=""
	I1213 10:36:18.414955  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:18.415226  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:18.915006  353396 type.go:168] "Request Body" body=""
	I1213 10:36:18.915082  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:18.915415  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:18.915472  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:19.415096  353396 type.go:168] "Request Body" body=""
	I1213 10:36:19.415183  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:19.415488  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:19.914201  353396 type.go:168] "Request Body" body=""
	I1213 10:36:19.914273  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:19.914619  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:20.414338  353396 type.go:168] "Request Body" body=""
	I1213 10:36:20.414409  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:20.414746  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:20.914260  353396 type.go:168] "Request Body" body=""
	I1213 10:36:20.914335  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:20.914704  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:21.414252  353396 type.go:168] "Request Body" body=""
	I1213 10:36:21.414338  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:21.414656  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:21.414724  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:21.914251  353396 type.go:168] "Request Body" body=""
	I1213 10:36:21.914328  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:21.914668  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:22.414268  353396 type.go:168] "Request Body" body=""
	I1213 10:36:22.414350  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:22.414680  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:22.914474  353396 type.go:168] "Request Body" body=""
	I1213 10:36:22.914553  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:22.914836  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:23.414235  353396 type.go:168] "Request Body" body=""
	I1213 10:36:23.414326  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:23.414670  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:23.414743  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:23.914266  353396 type.go:168] "Request Body" body=""
	I1213 10:36:23.914367  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:23.914763  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:24.414152  353396 type.go:168] "Request Body" body=""
	I1213 10:36:24.414223  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:24.414481  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:24.914197  353396 type.go:168] "Request Body" body=""
	I1213 10:36:24.914304  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:24.914663  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:25.414263  353396 type.go:168] "Request Body" body=""
	I1213 10:36:25.414339  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:25.414676  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:25.914948  353396 type.go:168] "Request Body" body=""
	I1213 10:36:25.915020  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:25.915277  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:25.915318  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:26.415116  353396 type.go:168] "Request Body" body=""
	I1213 10:36:26.415208  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:26.415550  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:26.914250  353396 type.go:168] "Request Body" body=""
	I1213 10:36:26.914329  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:26.914612  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:27.414291  353396 type.go:168] "Request Body" body=""
	I1213 10:36:27.414364  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:27.414625  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:27.914739  353396 type.go:168] "Request Body" body=""
	I1213 10:36:27.914816  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:27.915095  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:28.414897  353396 type.go:168] "Request Body" body=""
	I1213 10:36:28.414982  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:28.415303  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:28.415358  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:28.915084  353396 type.go:168] "Request Body" body=""
	I1213 10:36:28.915156  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:28.915451  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:29.414204  353396 type.go:168] "Request Body" body=""
	I1213 10:36:29.414283  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:29.414602  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:29.914216  353396 type.go:168] "Request Body" body=""
	I1213 10:36:29.914292  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:29.914661  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:30.414927  353396 type.go:168] "Request Body" body=""
	I1213 10:36:30.415000  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:30.415303  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:30.915117  353396 type.go:168] "Request Body" body=""
	I1213 10:36:30.915200  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:30.915511  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:30.915566  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:31.414255  353396 type.go:168] "Request Body" body=""
	I1213 10:36:31.414349  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:31.414739  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:31.914164  353396 type.go:168] "Request Body" body=""
	I1213 10:36:31.914237  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:31.914519  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:32.414229  353396 type.go:168] "Request Body" body=""
	I1213 10:36:32.414304  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:32.414647  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:32.914523  353396 type.go:168] "Request Body" body=""
	I1213 10:36:32.914604  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:32.914915  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:33.414159  353396 type.go:168] "Request Body" body=""
	I1213 10:36:33.414232  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:33.414567  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:33.414632  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:33.914300  353396 type.go:168] "Request Body" body=""
	I1213 10:36:33.914382  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:33.914670  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:34.414374  353396 type.go:168] "Request Body" body=""
	I1213 10:36:34.414451  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:34.414727  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:34.914184  353396 type.go:168] "Request Body" body=""
	I1213 10:36:34.914264  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:34.914587  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:35.414286  353396 type.go:168] "Request Body" body=""
	I1213 10:36:35.414359  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:35.414670  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:35.414741  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:35.914405  353396 type.go:168] "Request Body" body=""
	I1213 10:36:35.914489  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:35.914832  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:36.415085  353396 type.go:168] "Request Body" body=""
	I1213 10:36:36.415160  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:36.415449  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:36.914164  353396 type.go:168] "Request Body" body=""
	I1213 10:36:36.914244  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:36.914585  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:37.414308  353396 type.go:168] "Request Body" body=""
	I1213 10:36:37.414384  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:37.414780  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:37.414840  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:37.914758  353396 type.go:168] "Request Body" body=""
	I1213 10:36:37.914831  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:37.915157  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:38.414970  353396 type.go:168] "Request Body" body=""
	I1213 10:36:38.415052  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:38.415405  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:38.915122  353396 type.go:168] "Request Body" body=""
	I1213 10:36:38.915210  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:38.915558  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:39.414163  353396 type.go:168] "Request Body" body=""
	I1213 10:36:39.414237  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:39.414542  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:39.914247  353396 type.go:168] "Request Body" body=""
	I1213 10:36:39.914324  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:39.914669  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:39.914747  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:40.414415  353396 type.go:168] "Request Body" body=""
	I1213 10:36:40.414494  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:40.414850  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:40.915098  353396 type.go:168] "Request Body" body=""
	I1213 10:36:40.915172  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:40.915425  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:41.414124  353396 type.go:168] "Request Body" body=""
	I1213 10:36:41.414207  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:41.414558  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:41.914180  353396 type.go:168] "Request Body" body=""
	I1213 10:36:41.914266  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:41.914604  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:42.415138  353396 type.go:168] "Request Body" body=""
	I1213 10:36:42.415216  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:42.415488  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:42.415535  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:42.914549  353396 type.go:168] "Request Body" body=""
	I1213 10:36:42.914622  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:42.914929  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:43.414232  353396 type.go:168] "Request Body" body=""
	I1213 10:36:43.414317  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:43.414680  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:43.914384  353396 type.go:168] "Request Body" body=""
	I1213 10:36:43.914452  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:43.914730  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:44.414224  353396 type.go:168] "Request Body" body=""
	I1213 10:36:44.414302  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:44.414657  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:44.914395  353396 type.go:168] "Request Body" body=""
	I1213 10:36:44.914480  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:44.914836  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:44.914896  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:45.414191  353396 type.go:168] "Request Body" body=""
	I1213 10:36:45.414264  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:45.414567  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:45.914170  353396 type.go:168] "Request Body" body=""
	I1213 10:36:45.914244  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:45.914607  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:46.414263  353396 type.go:168] "Request Body" body=""
	I1213 10:36:46.414343  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:46.414668  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:46.914163  353396 type.go:168] "Request Body" body=""
	I1213 10:36:46.914242  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:46.914578  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:47.414274  353396 type.go:168] "Request Body" body=""
	I1213 10:36:47.414359  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:47.414709  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:47.414762  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:47.914884  353396 type.go:168] "Request Body" body=""
	I1213 10:36:47.914961  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:47.915333  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:48.415033  353396 type.go:168] "Request Body" body=""
	I1213 10:36:48.415102  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:48.415408  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:48.914142  353396 type.go:168] "Request Body" body=""
	I1213 10:36:48.914217  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:48.914551  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:49.414248  353396 type.go:168] "Request Body" body=""
	I1213 10:36:49.414332  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:49.414653  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:49.914171  353396 type.go:168] "Request Body" body=""
	I1213 10:36:49.914239  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:49.914490  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:49.914533  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:50.414250  353396 type.go:168] "Request Body" body=""
	I1213 10:36:50.414332  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:50.414655  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:50.914247  353396 type.go:168] "Request Body" body=""
	I1213 10:36:50.914325  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:50.914719  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:51.415136  353396 type.go:168] "Request Body" body=""
	I1213 10:36:51.415212  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:51.415495  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:51.914194  353396 type.go:168] "Request Body" body=""
	I1213 10:36:51.914271  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:51.914606  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:51.914663  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:52.414196  353396 type.go:168] "Request Body" body=""
	I1213 10:36:52.414278  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:52.414628  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:52.914521  353396 type.go:168] "Request Body" body=""
	I1213 10:36:52.914591  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:52.914917  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:53.414620  353396 type.go:168] "Request Body" body=""
	I1213 10:36:53.414716  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:53.415008  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:53.914831  353396 type.go:168] "Request Body" body=""
	I1213 10:36:53.914908  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:53.915259  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:53.915316  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:54.415073  353396 type.go:168] "Request Body" body=""
	I1213 10:36:54.415143  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:54.415457  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:54.914176  353396 type.go:168] "Request Body" body=""
	I1213 10:36:54.914260  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:54.914603  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:55.414307  353396 type.go:168] "Request Body" body=""
	I1213 10:36:55.414386  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:55.414744  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:55.914154  353396 type.go:168] "Request Body" body=""
	I1213 10:36:55.914224  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:55.914531  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:56.414237  353396 type.go:168] "Request Body" body=""
	I1213 10:36:56.414331  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:56.414644  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:56.414728  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:56.914208  353396 type.go:168] "Request Body" body=""
	I1213 10:36:56.914282  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:56.914593  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:57.414164  353396 type.go:168] "Request Body" body=""
	I1213 10:36:57.414233  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:57.414586  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:57.914740  353396 type.go:168] "Request Body" body=""
	I1213 10:36:57.914819  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:57.915172  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:58.414966  353396 type.go:168] "Request Body" body=""
	I1213 10:36:58.415044  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:58.415365  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:58.415427  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:58.914107  353396 type.go:168] "Request Body" body=""
	I1213 10:36:58.914182  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:58.914459  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:59.414161  353396 type.go:168] "Request Body" body=""
	I1213 10:36:59.414247  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:59.414593  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:59.914255  353396 type.go:168] "Request Body" body=""
	I1213 10:36:59.914339  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:59.914625  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:37:00.414213  353396 type.go:168] "Request Body" body=""
	I1213 10:37:00.414303  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:37:00.414598  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:37:00.914236  353396 type.go:168] "Request Body" body=""
	I1213 10:37:00.914308  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:37:00.914641  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:37:00.914708  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:37:01.414238  353396 type.go:168] "Request Body" body=""
	I1213 10:37:01.414328  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:37:01.414724  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:37:01.914174  353396 type.go:168] "Request Body" body=""
	I1213 10:37:01.914261  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:37:01.914555  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:37:02.414290  353396 type.go:168] "Request Body" body=""
	I1213 10:37:02.414375  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:37:02.414752  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:37:02.914840  353396 type.go:168] "Request Body" body=""
	I1213 10:37:02.914918  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:37:02.915213  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:37:02.915263  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:37:03.415012  353396 type.go:168] "Request Body" body=""
	I1213 10:37:03.415090  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:37:03.415417  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:37:03.914198  353396 type.go:168] "Request Body" body=""
	I1213 10:37:03.914276  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:37:03.914604  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:37:04.414267  353396 type.go:168] "Request Body" body=""
	I1213 10:37:04.414350  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:37:04.414724  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:37:04.914155  353396 type.go:168] "Request Body" body=""
	I1213 10:37:04.914256  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:37:04.914593  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:37:05.414221  353396 type.go:168] "Request Body" body=""
	I1213 10:37:05.414327  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:37:05.414680  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:37:05.414769  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:37:05.914428  353396 type.go:168] "Request Body" body=""
	I1213 10:37:05.914509  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:37:05.914816  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:37:06.414144  353396 type.go:168] "Request Body" body=""
	I1213 10:37:06.414222  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:37:06.414490  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:37:06.914508  353396 type.go:168] "Request Body" body=""
	I1213 10:37:06.914592  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:37:06.914976  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:37:07.414224  353396 type.go:168] "Request Body" body=""
	I1213 10:37:07.414306  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:37:07.414615  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:37:07.914710  353396 type.go:168] "Request Body" body=""
	I1213 10:37:07.914821  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:37:07.915135  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:37:07.915217  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:37:08.414751  353396 node_ready.go:38] duration metric: took 6m0.000751586s for node "functional-652709" to be "Ready" ...
	I1213 10:37:08.417881  353396 out.go:203] 
	W1213 10:37:08.420786  353396 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 10:37:08.420808  353396 out.go:285] * 
	W1213 10:37:08.422957  353396 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:37:08.425703  353396 out.go:203] 
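The "Request"/"Response" pairs above are client-go's round-tripper debug logging; the failure itself is minikube's wait loop giving up after its 6-minute node-readiness deadline ("WaitNodeCondition: context deadline exceeded"). The sketch below illustrates that polling pattern under the same 500ms interval and 6m deadline; it is a minimal illustration, not minikube's actual node_ready.go, and the kubeconfig path and node name are placeholders.

// Sketch of the wait loop visible in the log above: poll
// GET /api/v1/nodes/<name> every 500ms until the Ready condition is
// True or a 6m deadline expires. Illustration only; the kubeconfig
// path is a placeholder, not what the test harness uses.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "functional-652709", metav1.GetOptions{})
			if err != nil {
				// Transient errors (e.g. "connection refused" while the
				// apiserver is down) are logged and retried, not fatal --
				// the same behavior as the node_ready.go:55 warnings above.
				fmt.Printf("will retry: %v\n", err)
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		// After 6m the poll gives up with a timeout error, analogous to
		// the "context deadline exceeded" failure in the log.
		fmt.Println("node never became Ready:", err)
	}
}

Run against a cluster whose apiserver never comes up, this prints one "will retry" line per failed GET at the same cadence as the warnings above, then fails at the deadline.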
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.598797169Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.598872271Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.598967394Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.599038221Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.599099490Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.599162407Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.599221616Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.599281514Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.599349601Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.599433072Z" level=info msg="Connect containerd service"
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.599820046Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.600484496Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.612335546Z" level=info msg="Start subscribing containerd event"
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.612571864Z" level=info msg="Start recovering state"
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.612580390Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.612812277Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.652661888Z" level=info msg="Start event monitor"
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.652716773Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.652727021Z" level=info msg="Start streaming server"
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.652735989Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.652744260Z" level=info msg="runtime interface starting up..."
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.652752794Z" level=info msg="starting plugins..."
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.652765914Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 10:31:05 functional-652709 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.654760247Z" level=info msg="containerd successfully booted in 0.080960s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:10.301772    8492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:10.302551    8492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:10.304425    8492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:10.304932    8492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:10.306633    8492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +12.907693] overlayfs: idmapped layers are currently not supported
	[Dec13 09:25] overlayfs: idmapped layers are currently not supported
	[ +26.192425] overlayfs: idmapped layers are currently not supported
	[Dec13 09:26] overlayfs: idmapped layers are currently not supported
	[ +25.729788] overlayfs: idmapped layers are currently not supported
	[Dec13 09:27] overlayfs: idmapped layers are currently not supported
	[Dec13 09:28] overlayfs: idmapped layers are currently not supported
	[Dec13 09:31] overlayfs: idmapped layers are currently not supported
	[Dec13 09:32] overlayfs: idmapped layers are currently not supported
	[ +17.979093] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec13 09:43] overlayfs: idmapped layers are currently not supported
	[Dec13 09:45] overlayfs: idmapped layers are currently not supported
	[ +25.885102] overlayfs: idmapped layers are currently not supported
	[Dec13 09:46] overlayfs: idmapped layers are currently not supported
	[ +22.078149] overlayfs: idmapped layers are currently not supported
	[Dec13 09:47] overlayfs: idmapped layers are currently not supported
	[Dec13 09:48] overlayfs: idmapped layers are currently not supported
	[Dec13 09:49] overlayfs: idmapped layers are currently not supported
	[Dec13 09:51] overlayfs: idmapped layers are currently not supported
	[ +17.043564] overlayfs: idmapped layers are currently not supported
	[Dec13 09:52] overlayfs: idmapped layers are currently not supported
	[Dec13 09:53] overlayfs: idmapped layers are currently not supported
	[Dec13 10:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:19] hrtimer: interrupt took 21247146 ns
	
	
	==> kernel <==
	 10:37:10 up  3:19,  0 user,  load average: 0.32, 0.33, 0.78
	Linux functional-652709 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:37:07 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:37:07 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 809.
	Dec 13 10:37:07 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:37:07 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:37:07 functional-652709 kubelet[8372]: E1213 10:37:07.957790    8372 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:37:07 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:37:07 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:37:08 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 810.
	Dec 13 10:37:08 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:37:08 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:37:08 functional-652709 kubelet[8378]: E1213 10:37:08.719768    8378 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:37:08 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:37:08 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:37:09 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 811.
	Dec 13 10:37:09 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:37:09 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:37:09 functional-652709 kubelet[8399]: E1213 10:37:09.480087    8399 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:37:09 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:37:09 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:37:10 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 812.
	Dec 13 10:37:10 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:37:10 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:37:10 functional-652709 kubelet[8477]: E1213 10:37:10.241022    8477 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:37:10 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:37:10 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-652709 -n functional-652709
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-652709 -n functional-652709: exit status 2 (373.595571ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-652709" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (368.77s)
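The kubelet section above shows the root cause of this failure: every systemd restart (counters 809 through 812 in the window captured) dies in configuration validation with "kubelet is configured to not run on a host using cgroup v1", so the static pods, including the apiserver on 8441, never start and the 6m node-ready wait times out. A minimal triage sketch in Go, not part of minikube or this suite, that performs essentially the same host check the kubelet makes, assuming golang.org/x/sys/unix is available:

// cgroupcheck.go: hypothetical helper, reports whether the host mounts the
// cgroup v2 unified hierarchy at /sys/fs/cgroup, which is the property the
// kubelet validation in the log above is gating on.
package main

import (
	"fmt"
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	var st unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
		log.Fatalf("statfs /sys/fs/cgroup: %v", err)
	}
	if st.Type == unix.CGROUP2_SUPER_MAGIC {
		// Unified hierarchy: the v1.35.0-beta.0 kubelet would accept this host.
		fmt.Println("cgroup v2 (unified hierarchy)")
	} else {
		// Legacy layout: matches the repeated validation failure in the log.
		fmt.Println("cgroup v1 (legacy hierarchy)")
	}
}

On this runner (Ubuntu 20.04, kernel 5.15.0-1084-aws) the mount is the legacy v1 layout, which is why all 800+ kubelet restarts fail identically.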

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-652709 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-652709 get po -A: exit status 1 (57.612113ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-652709 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-652709 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-652709 get po -A"
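The failed assertions are just the transport error surfacing: "kubectl get po -A" is a REST list against https://192.168.49.2:8441, and nothing is listening there. A rough client-go equivalent of the same check, a sketch that assumes the kubeconfig's current context already points at functional-652709 (the test itself passes --context explicitly):

// listpods.go: hypothetical reproduction of the kubectl call using client-go.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config and use its current context.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Empty namespace means all namespaces, i.e. the -A flag.
	pods, err := cs.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err) // here: connection refused to 192.168.49.2:8441
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s\n", p.Namespace, p.Name)
	}
}

While the cluster is in the state captured here, List fails with the same connection-refused error, so the empty stdout and non-empty stderr asserted above follow directly.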
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-652709
helpers_test.go:244: (dbg) docker inspect functional-652709:

-- stdout --
	[
	    {
	        "Id": "0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f",
	        "Created": "2025-12-13T10:22:44.366993781Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 347931,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:22:44.437030763Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/hostname",
	        "HostsPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/hosts",
	        "LogPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f-json.log",
	        "Name": "/functional-652709",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-652709:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-652709",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f",
	                "LowerDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095-init/diff:/var/lib/docker/overlay2/5d0ca86320f2c06765995d644a73af30cd6ddb36f5c1a2a6b1ebb24af53cdabd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/merged",
	                "UpperDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/diff",
	                "WorkDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-652709",
	                "Source": "/var/lib/docker/volumes/functional-652709/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-652709",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-652709",
	                "name.minikube.sigs.k8s.io": "functional-652709",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "52e527b5bd789a02eb7efb651200033ed4929e5fc7545e9df042d3f777cc9782",
	            "SandboxKey": "/var/run/docker/netns/52e527b5bd78",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-652709": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:23:08:9e:cb:13",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "344f2b940117dadb28d1ef1328f911c0446307288fdfafebfe59f38e473f79cb",
	                    "EndpointID": "8954f96e5987202be5715e7023384fe862744778b2520bccba28c57814f0980f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-652709",
	                        "0f6101071ca2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
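The inspect output narrows the problem: the container is Running, the profile network assigns 192.168.49.2, and 8441/tcp is published to 127.0.0.1:33128 on the host, so port forwarding is intact and the refusal happens inside the guest. A hypothetical probe of that published port; /livez is a standard kube-apiserver health endpoint, and TLS verification is skipped here only because the serving certificate chains to minikubeCA rather than a system root:

// probe.go: hypothetical triage helper hitting the apiserver health endpoint
// through the host port Docker publishes for 8441/tcp (33128 above).
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 3 * time.Second,
		Transport: &http.Transport{
			// Quick liveness probe only; do not do this for real traffic.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://127.0.0.1:33128/livez")
	if err != nil {
		fmt.Println("apiserver unreachable:", err) // expected here: connection refused
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver /livez:", resp.Status)
}

At the time this report was captured it would print a connection-refused error, matching the kubectl output above.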
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-652709 -n functional-652709
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-652709 -n functional-652709: exit status 2 (304.10474ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop           │ -p addons-672850                                                                                                                                        │ addons-672850     │ jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:18 UTC │
	│ addons         │ enable dashboard -p addons-672850                                                                                                                       │ addons-672850     │ jenkins │ v1.37.0 │ 13 Dec 25 10:18 UTC │ 13 Dec 25 10:18 UTC │
	│ addons         │ disable dashboard -p addons-672850                                                                                                                      │ addons-672850     │ jenkins │ v1.37.0 │ 13 Dec 25 10:18 UTC │ 13 Dec 25 10:18 UTC │
	│ addons         │ disable gvisor -p addons-672850                                                                                                                         │ addons-672850     │ jenkins │ v1.37.0 │ 13 Dec 25 10:18 UTC │ 13 Dec 25 10:18 UTC │
	│ delete         │ -p addons-672850                                                                                                                                        │ addons-672850     │ jenkins │ v1.37.0 │ 13 Dec 25 10:18 UTC │ 13 Dec 25 10:18 UTC │
	│ start          │ -p dockerenv-403574 --driver=docker  --container-runtime=containerd                                                                                     │ dockerenv-403574  │ jenkins │ v1.37.0 │ 13 Dec 25 10:18 UTC │ 13 Dec 25 10:18 UTC │
	│ docker-env     │ --ssh-host --ssh-add -p dockerenv-403574                                                                                                                │ dockerenv-403574  │ jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
	│ delete         │ -p dockerenv-403574                                                                                                                                     │ dockerenv-403574  │ jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
	│ start          │ -p nospam-462625 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-462625 --driver=docker  --container-runtime=containerd                           │ nospam-462625     │ jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
	│ start          │ nospam-462625 --log_dir /tmp/nospam-462625 start --dry-run                                                                                              │ nospam-462625     │ jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │                     │
	│ start          │ nospam-462625 --log_dir /tmp/nospam-462625 start --dry-run                                                                                              │ nospam-462625     │ jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │                     │
	│ start          │ nospam-462625 --log_dir /tmp/nospam-462625 start --dry-run                                                                                              │ nospam-462625     │ jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │                     │
	│ pause          │ nospam-462625 --log_dir /tmp/nospam-462625 pause                                                                                                        │ nospam-462625     │ jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
	│ pause          │ nospam-462625 --log_dir /tmp/nospam-462625 pause                                                                                                        │ nospam-462625     │ jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
	│ update-context │ functional-319494 update-context --alsologtostderr -v=2                                                                                                 │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ image          │ functional-319494 image ls --format short --alsologtostderr                                                                                             │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ image          │ functional-319494 image ls --format yaml --alsologtostderr                                                                                              │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ ssh            │ functional-319494 ssh pgrep buildkitd                                                                                                                   │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │                     │
	│ image          │ functional-319494 image ls --format json --alsologtostderr                                                                                              │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ image          │ functional-319494 image build -t localhost/my-image:functional-319494 testdata/build --alsologtostderr                                                  │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ image          │ functional-319494 image ls --format table --alsologtostderr                                                                                             │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ image          │ functional-319494 image ls                                                                                                                              │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ delete         │ -p functional-319494                                                                                                                                    │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ start          │ -p functional-652709 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │                     │
	│ start          │ -p functional-652709 --alsologtostderr -v=8                                                                                                             │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:31 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
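Both start invocations for functional-652709 in the audit table lack an end time, i.e. neither completed; the second run, logged below under "Last Start", reuses the saved profile instead of recreating the machine. A sketch for inspecting that saved profile directly; the config.json path appears in the log below, and the field names are assumptions inferred from the cluster config dump, not minikube's authoritative schema:

// profilecfg.go: hypothetical reader for a minikube profile config.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// Assumed subset of the profile schema, mirroring the cluster config
// printed in the Last Start log below.
type clusterConfig struct {
	Name             string
	Driver           string
	KubernetesConfig struct {
		ClusterName       string
		KubernetesVersion string
		ContainerRuntime  string
	}
}

func main() {
	if len(os.Args) < 2 {
		log.Fatal("usage: profilecfg <path/to/config.json>")
	}
	raw, err := os.ReadFile(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	var cfg clusterConfig
	if err := json.Unmarshal(raw, &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("profile %s: driver=%s kubernetes=%s runtime=%s\n",
		cfg.Name, cfg.Driver, cfg.KubernetesConfig.KubernetesVersion, cfg.KubernetesConfig.ContainerRuntime)
}

For this run the argument would be /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/config.json, the path the log below reports saving to.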
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:31:02
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:31:02.672113  353396 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:31:02.672249  353396 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:31:02.672258  353396 out.go:374] Setting ErrFile to fd 2...
	I1213 10:31:02.672263  353396 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:31:02.672511  353396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 10:31:02.672909  353396 out.go:368] Setting JSON to false
	I1213 10:31:02.673776  353396 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":11616,"bootTime":1765610247,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 10:31:02.673896  353396 start.go:143] virtualization:  
	I1213 10:31:02.677410  353396 out.go:179] * [functional-652709] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:31:02.681384  353396 notify.go:221] Checking for updates...
	I1213 10:31:02.681459  353396 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 10:31:02.684444  353396 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:31:02.687336  353396 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:31:02.690317  353396 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 10:31:02.693212  353396 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:31:02.696019  353396 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:31:02.699466  353396 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:31:02.699577  353396 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:31:02.725188  353396 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:31:02.725318  353396 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:31:02.796082  353396 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:31:02.785556605 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:31:02.796187  353396 docker.go:319] overlay module found
	I1213 10:31:02.799378  353396 out.go:179] * Using the docker driver based on existing profile
	I1213 10:31:02.802341  353396 start.go:309] selected driver: docker
	I1213 10:31:02.802370  353396 start.go:927] validating driver "docker" against &{Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:31:02.802524  353396 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:31:02.802652  353396 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:31:02.859333  353396 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:31:02.849982894 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:31:02.859762  353396 cni.go:84] Creating CNI manager for ""
	I1213 10:31:02.859824  353396 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:31:02.859884  353396 start.go:353] cluster config:
	{Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:31:02.863117  353396 out.go:179] * Starting "functional-652709" primary control-plane node in "functional-652709" cluster
	I1213 10:31:02.865981  353396 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 10:31:02.868957  353396 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:31:02.871941  353396 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 10:31:02.871997  353396 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 10:31:02.872008  353396 cache.go:65] Caching tarball of preloaded images
	I1213 10:31:02.872055  353396 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:31:02.872104  353396 preload.go:238] Found /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 10:31:02.872129  353396 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 10:31:02.872236  353396 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/config.json ...
	I1213 10:31:02.890218  353396 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:31:02.890243  353396 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:31:02.890259  353396 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:31:02.890291  353396 start.go:360] acquireMachinesLock for functional-652709: {Name:mk6e8c40fbbb5af0bb2468340fd710875030300d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:31:02.890351  353396 start.go:364] duration metric: took 34.691µs to acquireMachinesLock for "functional-652709"
	I1213 10:31:02.890374  353396 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:31:02.890380  353396 fix.go:54] fixHost starting: 
	I1213 10:31:02.890658  353396 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
	I1213 10:31:02.911217  353396 fix.go:112] recreateIfNeeded on functional-652709: state=Running err=<nil>
	W1213 10:31:02.911248  353396 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 10:31:02.914505  353396 out.go:252] * Updating the running docker "functional-652709" container ...
	I1213 10:31:02.914550  353396 machine.go:94] provisionDockerMachine start ...
	I1213 10:31:02.914653  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:02.937238  353396 main.go:143] libmachine: Using SSH client type: native
	I1213 10:31:02.937582  353396 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1213 10:31:02.937592  353396 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:31:03.091334  353396 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-652709
	
	I1213 10:31:03.091359  353396 ubuntu.go:182] provisioning hostname "functional-652709"
	I1213 10:31:03.091424  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:03.110422  353396 main.go:143] libmachine: Using SSH client type: native
	I1213 10:31:03.110837  353396 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1213 10:31:03.110855  353396 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-652709 && echo "functional-652709" | sudo tee /etc/hostname
	I1213 10:31:03.277113  353396 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-652709
	
	I1213 10:31:03.277196  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:03.294664  353396 main.go:143] libmachine: Using SSH client type: native
	I1213 10:31:03.295057  353396 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1213 10:31:03.295079  353396 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-652709' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-652709/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-652709' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:31:03.447182  353396 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:31:03.447207  353396 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-307042/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-307042/.minikube}
	I1213 10:31:03.447240  353396 ubuntu.go:190] setting up certificates
	I1213 10:31:03.447256  353396 provision.go:84] configureAuth start
	I1213 10:31:03.447330  353396 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-652709
	I1213 10:31:03.465044  353396 provision.go:143] copyHostCerts
	I1213 10:31:03.465100  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem
	I1213 10:31:03.465141  353396 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem, removing ...
	I1213 10:31:03.465148  353396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem
	I1213 10:31:03.465220  353396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem (1082 bytes)
	I1213 10:31:03.465329  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem
	I1213 10:31:03.465349  353396 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem, removing ...
	I1213 10:31:03.465353  353396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem
	I1213 10:31:03.465383  353396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem (1123 bytes)
	I1213 10:31:03.465436  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem
	I1213 10:31:03.465453  353396 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem, removing ...
	I1213 10:31:03.465457  353396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem
	I1213 10:31:03.465486  353396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem (1675 bytes)
	I1213 10:31:03.465541  353396 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem org=jenkins.functional-652709 san=[127.0.0.1 192.168.49.2 functional-652709 localhost minikube]
	I1213 10:31:03.927648  353396 provision.go:177] copyRemoteCerts
	I1213 10:31:03.927724  353396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:31:03.927763  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:03.947692  353396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:31:04.064623  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 10:31:04.064688  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:31:04.082355  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 10:31:04.082418  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:31:04.100866  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 10:31:04.100930  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:31:04.121259  353396 provision.go:87] duration metric: took 673.978127ms to configureAuth
	I1213 10:31:04.121312  353396 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:31:04.121495  353396 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:31:04.121509  353396 machine.go:97] duration metric: took 1.206951102s to provisionDockerMachine
	I1213 10:31:04.121518  353396 start.go:293] postStartSetup for "functional-652709" (driver="docker")
	I1213 10:31:04.121529  353396 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:31:04.121586  353396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:31:04.121633  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:04.139400  353396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:31:04.246752  353396 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:31:04.250273  353396 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1213 10:31:04.250297  353396 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1213 10:31:04.250302  353396 command_runner.go:130] > VERSION_ID="12"
	I1213 10:31:04.250307  353396 command_runner.go:130] > VERSION="12 (bookworm)"
	I1213 10:31:04.250312  353396 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1213 10:31:04.250316  353396 command_runner.go:130] > ID=debian
	I1213 10:31:04.250320  353396 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1213 10:31:04.250325  353396 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1213 10:31:04.250331  353396 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1213 10:31:04.250368  353396 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:31:04.250390  353396 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
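The cat /etc/os-release output above is parsed into key/value pairs; the libmachine warning shows the parser maps known keys onto struct fields and complains about keys it does not model (here VERSION_CODENAME). A standalone sketch of that parse, collecting into a map instead of a struct (assumed behavior, not libmachine's actual code):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/etc/os-release")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        info := map[string]string{}
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            key, val, ok := strings.Cut(line, "=")
            if !ok {
                continue // not a KEY=value line
            }
            info[key] = strings.Trim(val, `"`)
        }
        if err := sc.Err(); err != nil {
            panic(err)
        }
        fmt.Printf("Remote host: %s\n", info["PRETTY_NAME"])
    }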
	I1213 10:31:04.250401  353396 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/addons for local assets ...
	I1213 10:31:04.250463  353396 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/files for local assets ...
	I1213 10:31:04.250545  353396 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> 3089152.pem in /etc/ssl/certs
	I1213 10:31:04.250556  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> /etc/ssl/certs/3089152.pem
	I1213 10:31:04.250633  353396 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/test/nested/copy/308915/hosts -> hosts in /etc/test/nested/copy/308915
	I1213 10:31:04.250715  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/test/nested/copy/308915/hosts -> /etc/test/nested/copy/308915/hosts
	I1213 10:31:04.250766  353396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/308915
	I1213 10:31:04.258199  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 10:31:04.275892  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/test/nested/copy/308915/hosts --> /etc/test/nested/copy/308915/hosts (40 bytes)
	I1213 10:31:04.293256  353396 start.go:296] duration metric: took 171.721845ms for postStartSetup
	I1213 10:31:04.293373  353396 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:31:04.293418  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:04.310428  353396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:31:04.412061  353396 command_runner.go:130] > 11%
	I1213 10:31:04.412134  353396 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:31:04.417606  353396 command_runner.go:130] > 174G
	I1213 10:31:04.418241  353396 fix.go:56] duration metric: took 1.527856492s for fixHost
	I1213 10:31:04.418260  353396 start.go:83] releasing machines lock for "functional-652709", held for 1.527895524s
	I1213 10:31:04.418328  353396 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-652709
	I1213 10:31:04.443217  353396 ssh_runner.go:195] Run: cat /version.json
	I1213 10:31:04.443268  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:04.443564  353396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:31:04.443617  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:04.481371  353396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:31:04.481516  353396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:31:04.669844  353396 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1213 10:31:04.669910  353396 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1213 10:31:04.670045  353396 ssh_runner.go:195] Run: systemctl --version
	I1213 10:31:04.676239  353396 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1213 10:31:04.676276  353396 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1213 10:31:04.676350  353396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 10:31:04.680689  353396 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 10:31:04.680854  353396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:31:04.680918  353396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:31:04.688793  353396 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
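The find invocation above would rename any bridge or podman CNI config under /etc/cni/net.d to a *.mk_disabled suffix so the runtime's conf_dir scan stops matching it; on this node nothing qualified. The same filter expressed in Go, as a sketch rather than minikube's implementation:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        entries, err := filepath.Glob("/etc/cni/net.d/*")
        if err != nil {
            panic(err)
        }
        for _, p := range entries {
            fi, err := os.Stat(p)
            if err != nil || fi.IsDir() {
                continue // mirror find's -maxdepth 1 -type f
            }
            base := filepath.Base(p)
            if strings.HasSuffix(base, ".mk_disabled") {
                continue // already sidelined
            }
            if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
                fmt.Printf("disabling %s\n", p)
                if err := os.Rename(p, p+".mk_disabled"); err != nil {
                    panic(err)
                }
            }
        }
    }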
	I1213 10:31:04.688818  353396 start.go:496] detecting cgroup driver to use...
	I1213 10:31:04.688851  353396 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:31:04.688909  353396 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 10:31:04.704425  353396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 10:31:04.717662  353396 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:31:04.717728  353396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:31:04.733551  353396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:31:04.746955  353396 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:31:04.865557  353396 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:31:04.977869  353396 docker.go:234] disabling docker service ...
	I1213 10:31:04.977950  353396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:31:04.992461  353396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:31:05.013428  353396 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:31:05.135601  353396 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:31:05.282715  353396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:31:05.296047  353396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:31:05.308957  353396 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1213 10:31:05.310188  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 10:31:05.319385  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 10:31:05.328561  353396 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 10:31:05.328627  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 10:31:05.337573  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:31:05.346847  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 10:31:05.355976  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:31:05.364985  353396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:31:05.373424  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 10:31:05.382892  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 10:31:05.391826  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 10:31:05.401136  353396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:31:05.407987  353396 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 10:31:05.408928  353396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:31:05.416444  353396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:31:05.526748  353396 ssh_runner.go:195] Run: sudo systemctl restart containerd
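The run of sed one-liners above edits /etc/containerd/config.toml in place: it pins sandbox_image to registry.k8s.io/pause:3.10.1, forces SystemdCgroup = false to match the detected cgroupfs driver, normalizes the runtime type to io.containerd.runc.v2, and re-enables unprivileged ports, before systemd is reloaded and containerd restarted. Two of those rewrites expressed with Go's regexp package (a sketch; point it at a copy of the file, not a live node):

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        path := "config.toml" // stand-in for /etc/containerd/config.toml
        conf, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        // Pin the pause image, as the first sed above does.
        conf = regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`).
            ReplaceAll(conf, []byte(`${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`))
        // Have runc drive cgroups directly (cgroupfs driver).
        conf = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`).
            ReplaceAll(conf, []byte(`${1}SystemdCgroup = false`))
        if err := os.WriteFile(path, conf, 0o644); err != nil {
            panic(err)
        }
    }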
	I1213 10:31:05.655433  353396 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 10:31:05.655515  353396 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 10:31:05.659353  353396 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1213 10:31:05.659378  353396 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 10:31:05.659389  353396 command_runner.go:130] > Device: 0,72	Inode: 1622        Links: 1
	I1213 10:31:05.659396  353396 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 10:31:05.659402  353396 command_runner.go:130] > Access: 2025-12-13 10:31:05.610211940 +0000
	I1213 10:31:05.659407  353396 command_runner.go:130] > Modify: 2025-12-13 10:31:05.610211940 +0000
	I1213 10:31:05.659412  353396 command_runner.go:130] > Change: 2025-12-13 10:31:05.610211940 +0000
	I1213 10:31:05.659416  353396 command_runner.go:130] >  Birth: -
	I1213 10:31:05.660005  353396 start.go:564] Will wait 60s for crictl version
	I1213 10:31:05.660063  353396 ssh_runner.go:195] Run: which crictl
	I1213 10:31:05.663492  353396 command_runner.go:130] > /usr/local/bin/crictl
	I1213 10:31:05.663579  353396 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:31:05.685881  353396 command_runner.go:130] > Version:  0.1.0
	I1213 10:31:05.685946  353396 command_runner.go:130] > RuntimeName:  containerd
	I1213 10:31:05.686097  353396 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1213 10:31:05.686253  353396 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 10:31:05.688463  353396 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 10:31:05.688528  353396 ssh_runner.go:195] Run: containerd --version
	I1213 10:31:05.706883  353396 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1213 10:31:05.709639  353396 ssh_runner.go:195] Run: containerd --version
	I1213 10:31:05.727187  353396 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1213 10:31:05.735610  353396 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 10:31:05.738579  353396 cli_runner.go:164] Run: docker network inspect functional-652709 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:31:05.753316  353396 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 10:31:05.757039  353396 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1213 10:31:05.757213  353396 kubeadm.go:884] updating cluster {Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:31:05.757336  353396 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 10:31:05.757417  353396 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:31:05.778952  353396 command_runner.go:130] > {
	I1213 10:31:05.778976  353396 command_runner.go:130] >   "images":  [
	I1213 10:31:05.778980  353396 command_runner.go:130] >     {
	I1213 10:31:05.778990  353396 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 10:31:05.778995  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779001  353396 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 10:31:05.779005  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779009  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779018  353396 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1213 10:31:05.779024  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779028  353396 command_runner.go:130] >       "size":  "40636774",
	I1213 10:31:05.779032  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779041  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779045  353396 command_runner.go:130] >     },
	I1213 10:31:05.779053  353396 command_runner.go:130] >     {
	I1213 10:31:05.779066  353396 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 10:31:05.779074  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779080  353396 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 10:31:05.779087  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779091  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779102  353396 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 10:31:05.779106  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779110  353396 command_runner.go:130] >       "size":  "8034419",
	I1213 10:31:05.779116  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779120  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779128  353396 command_runner.go:130] >     },
	I1213 10:31:05.779131  353396 command_runner.go:130] >     {
	I1213 10:31:05.779138  353396 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 10:31:05.779145  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779150  353396 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 10:31:05.779157  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779163  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779175  353396 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1213 10:31:05.779181  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779185  353396 command_runner.go:130] >       "size":  "21168808",
	I1213 10:31:05.779190  353396 command_runner.go:130] >       "username":  "nonroot",
	I1213 10:31:05.779195  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779199  353396 command_runner.go:130] >     },
	I1213 10:31:05.779204  353396 command_runner.go:130] >     {
	I1213 10:31:05.779211  353396 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 10:31:05.779218  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779224  353396 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 10:31:05.779231  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779235  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779246  353396 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1213 10:31:05.779252  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779257  353396 command_runner.go:130] >       "size":  "21136588",
	I1213 10:31:05.779267  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.779275  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.779279  353396 command_runner.go:130] >       },
	I1213 10:31:05.779283  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779290  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779299  353396 command_runner.go:130] >     },
	I1213 10:31:05.779303  353396 command_runner.go:130] >     {
	I1213 10:31:05.779314  353396 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 10:31:05.779321  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779327  353396 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 10:31:05.779334  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779338  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779350  353396 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1213 10:31:05.779357  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779361  353396 command_runner.go:130] >       "size":  "24678359",
	I1213 10:31:05.779365  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.779375  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.779384  353396 command_runner.go:130] >       },
	I1213 10:31:05.779388  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779396  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779400  353396 command_runner.go:130] >     },
	I1213 10:31:05.779407  353396 command_runner.go:130] >     {
	I1213 10:31:05.779414  353396 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 10:31:05.779421  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779428  353396 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 10:31:05.779435  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779439  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779450  353396 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1213 10:31:05.779454  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779461  353396 command_runner.go:130] >       "size":  "20661043",
	I1213 10:31:05.779465  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.779473  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.779477  353396 command_runner.go:130] >       },
	I1213 10:31:05.779489  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779497  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779501  353396 command_runner.go:130] >     },
	I1213 10:31:05.779507  353396 command_runner.go:130] >     {
	I1213 10:31:05.779515  353396 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 10:31:05.779522  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779527  353396 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 10:31:05.779534  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779538  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779546  353396 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 10:31:05.779553  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779557  353396 command_runner.go:130] >       "size":  "22429671",
	I1213 10:31:05.779561  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779567  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779571  353396 command_runner.go:130] >     },
	I1213 10:31:05.779578  353396 command_runner.go:130] >     {
	I1213 10:31:05.779586  353396 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 10:31:05.779593  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779600  353396 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 10:31:05.779606  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779610  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779622  353396 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1213 10:31:05.779628  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779633  353396 command_runner.go:130] >       "size":  "15391364",
	I1213 10:31:05.779641  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.779645  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.779648  353396 command_runner.go:130] >       },
	I1213 10:31:05.779654  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779658  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779666  353396 command_runner.go:130] >     },
	I1213 10:31:05.779669  353396 command_runner.go:130] >     {
	I1213 10:31:05.779681  353396 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 10:31:05.779688  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779698  353396 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 10:31:05.779704  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779709  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779720  353396 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1213 10:31:05.779726  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779730  353396 command_runner.go:130] >       "size":  "267939",
	I1213 10:31:05.779735  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.779741  353396 command_runner.go:130] >         "value":  "65535"
	I1213 10:31:05.779744  353396 command_runner.go:130] >       },
	I1213 10:31:05.779753  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779758  353396 command_runner.go:130] >       "pinned":  true
	I1213 10:31:05.779764  353396 command_runner.go:130] >     }
	I1213 10:31:05.779767  353396 command_runner.go:130] >   ]
	I1213 10:31:05.779770  353396 command_runner.go:130] > }
	I1213 10:31:05.781791  353396 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 10:31:05.781813  353396 containerd.go:534] Images already preloaded, skipping extraction
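The preloaded verdict above comes from comparing this crictl dump against the image set minikube expects for the requested Kubernetes version. A sketch of that comparison; the struct mirrors the JSON shape shown, but the required-image list here is a hand-picked sample from the dump, not minikube's authoritative set:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // imageList mirrors the shape of `crictl images --output json`.
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            panic(err)
        }
        have := map[string]bool{}
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                have[tag] = true
            }
        }
        // Sample tags taken from the dump above; the real required set
        // is derived from the Kubernetes version being installed.
        for _, want := range []string{
            "registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
            "registry.k8s.io/etcd:3.6.5-0",
            "registry.k8s.io/pause:3.10.1",
        } {
            fmt.Printf("%-50s preloaded=%v\n", want, have[want])
        }
    }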
	I1213 10:31:05.781881  353396 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:31:05.805396  353396 command_runner.go:130] > {
	I1213 10:31:05.805420  353396 command_runner.go:130] >   "images":  [
	I1213 10:31:05.805426  353396 command_runner.go:130] >     {
	I1213 10:31:05.805436  353396 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 10:31:05.805441  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.805447  353396 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 10:31:05.805452  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805456  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.805465  353396 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1213 10:31:05.805471  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805477  353396 command_runner.go:130] >       "size":  "40636774",
	I1213 10:31:05.805485  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.805490  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.805501  353396 command_runner.go:130] >     },
	I1213 10:31:05.805504  353396 command_runner.go:130] >     {
	I1213 10:31:05.805512  353396 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 10:31:05.805517  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.805523  353396 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 10:31:05.805528  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805543  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.805556  353396 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 10:31:05.805566  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805576  353396 command_runner.go:130] >       "size":  "8034419",
	I1213 10:31:05.805580  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.805590  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.805594  353396 command_runner.go:130] >     },
	I1213 10:31:05.805601  353396 command_runner.go:130] >     {
	I1213 10:31:05.805608  353396 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 10:31:05.805619  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.805625  353396 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 10:31:05.805630  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805655  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.805669  353396 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1213 10:31:05.805675  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805680  353396 command_runner.go:130] >       "size":  "21168808",
	I1213 10:31:05.805687  353396 command_runner.go:130] >       "username":  "nonroot",
	I1213 10:31:05.805693  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.805697  353396 command_runner.go:130] >     },
	I1213 10:31:05.805701  353396 command_runner.go:130] >     {
	I1213 10:31:05.805707  353396 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 10:31:05.805715  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.805720  353396 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 10:31:05.805727  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805732  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.805743  353396 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1213 10:31:05.805750  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805754  353396 command_runner.go:130] >       "size":  "21136588",
	I1213 10:31:05.805762  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.805772  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.805778  353396 command_runner.go:130] >       },
	I1213 10:31:05.805783  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.805787  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.805795  353396 command_runner.go:130] >     },
	I1213 10:31:05.805803  353396 command_runner.go:130] >     {
	I1213 10:31:05.805810  353396 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 10:31:05.805818  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.805824  353396 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 10:31:05.805846  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805855  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.805863  353396 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1213 10:31:05.805867  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805873  353396 command_runner.go:130] >       "size":  "24678359",
	I1213 10:31:05.805877  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.805891  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.805894  353396 command_runner.go:130] >       },
	I1213 10:31:05.805899  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.805906  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.805910  353396 command_runner.go:130] >     },
	I1213 10:31:05.805917  353396 command_runner.go:130] >     {
	I1213 10:31:05.805924  353396 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 10:31:05.805931  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.805938  353396 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 10:31:05.805941  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805946  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.805956  353396 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1213 10:31:05.805963  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805967  353396 command_runner.go:130] >       "size":  "20661043",
	I1213 10:31:05.805972  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.805979  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.805983  353396 command_runner.go:130] >       },
	I1213 10:31:05.805991  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.805995  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.806002  353396 command_runner.go:130] >     },
	I1213 10:31:05.806005  353396 command_runner.go:130] >     {
	I1213 10:31:05.806012  353396 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 10:31:05.806021  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.806032  353396 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 10:31:05.806036  353396 command_runner.go:130] >       ],
	I1213 10:31:05.806040  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.806048  353396 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 10:31:05.806055  353396 command_runner.go:130] >       ],
	I1213 10:31:05.806059  353396 command_runner.go:130] >       "size":  "22429671",
	I1213 10:31:05.806068  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.806072  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.806078  353396 command_runner.go:130] >     },
	I1213 10:31:05.806082  353396 command_runner.go:130] >     {
	I1213 10:31:05.806089  353396 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 10:31:05.806096  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.806101  353396 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 10:31:05.806109  353396 command_runner.go:130] >       ],
	I1213 10:31:05.806113  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.806124  353396 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1213 10:31:05.806131  353396 command_runner.go:130] >       ],
	I1213 10:31:05.806135  353396 command_runner.go:130] >       "size":  "15391364",
	I1213 10:31:05.806139  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.806147  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.806151  353396 command_runner.go:130] >       },
	I1213 10:31:05.806159  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.806164  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.806171  353396 command_runner.go:130] >     },
	I1213 10:31:05.806174  353396 command_runner.go:130] >     {
	I1213 10:31:05.806180  353396 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 10:31:05.806186  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.806191  353396 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 10:31:05.806197  353396 command_runner.go:130] >       ],
	I1213 10:31:05.806202  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.806213  353396 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1213 10:31:05.806217  353396 command_runner.go:130] >       ],
	I1213 10:31:05.806230  353396 command_runner.go:130] >       "size":  "267939",
	I1213 10:31:05.806238  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.806242  353396 command_runner.go:130] >         "value":  "65535"
	I1213 10:31:05.806251  353396 command_runner.go:130] >       },
	I1213 10:31:05.806255  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.806259  353396 command_runner.go:130] >       "pinned":  true
	I1213 10:31:05.806262  353396 command_runner.go:130] >     }
	I1213 10:31:05.806267  353396 command_runner.go:130] >   ]
	I1213 10:31:05.806271  353396 command_runner.go:130] > }
	I1213 10:31:05.808725  353396 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 10:31:05.808749  353396 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:31:05.808757  353396 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1213 10:31:05.808887  353396 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-652709 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 10:31:05.808967  353396 ssh_runner.go:195] Run: sudo crictl info
	I1213 10:31:05.831572  353396 command_runner.go:130] > {
	I1213 10:31:05.831594  353396 command_runner.go:130] >   "cniconfig": {
	I1213 10:31:05.831601  353396 command_runner.go:130] >     "Networks": [
	I1213 10:31:05.831604  353396 command_runner.go:130] >       {
	I1213 10:31:05.831609  353396 command_runner.go:130] >         "Config": {
	I1213 10:31:05.831614  353396 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1213 10:31:05.831619  353396 command_runner.go:130] >           "Name": "cni-loopback",
	I1213 10:31:05.831623  353396 command_runner.go:130] >           "Plugins": [
	I1213 10:31:05.831627  353396 command_runner.go:130] >             {
	I1213 10:31:05.831631  353396 command_runner.go:130] >               "Network": {
	I1213 10:31:05.831635  353396 command_runner.go:130] >                 "ipam": {},
	I1213 10:31:05.831641  353396 command_runner.go:130] >                 "type": "loopback"
	I1213 10:31:05.831650  353396 command_runner.go:130] >               },
	I1213 10:31:05.831662  353396 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1213 10:31:05.831670  353396 command_runner.go:130] >             }
	I1213 10:31:05.831674  353396 command_runner.go:130] >           ],
	I1213 10:31:05.831684  353396 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1213 10:31:05.831688  353396 command_runner.go:130] >         },
	I1213 10:31:05.831696  353396 command_runner.go:130] >         "IFName": "lo"
	I1213 10:31:05.831703  353396 command_runner.go:130] >       }
	I1213 10:31:05.831707  353396 command_runner.go:130] >     ],
	I1213 10:31:05.831712  353396 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1213 10:31:05.831720  353396 command_runner.go:130] >     "PluginDirs": [
	I1213 10:31:05.831724  353396 command_runner.go:130] >       "/opt/cni/bin"
	I1213 10:31:05.831731  353396 command_runner.go:130] >     ],
	I1213 10:31:05.831736  353396 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1213 10:31:05.831743  353396 command_runner.go:130] >     "Prefix": "eth"
	I1213 10:31:05.831747  353396 command_runner.go:130] >   },
	I1213 10:31:05.831754  353396 command_runner.go:130] >   "config": {
	I1213 10:31:05.831762  353396 command_runner.go:130] >     "cdiSpecDirs": [
	I1213 10:31:05.831765  353396 command_runner.go:130] >       "/etc/cdi",
	I1213 10:31:05.831781  353396 command_runner.go:130] >       "/var/run/cdi"
	I1213 10:31:05.831789  353396 command_runner.go:130] >     ],
	I1213 10:31:05.831793  353396 command_runner.go:130] >     "cni": {
	I1213 10:31:05.831797  353396 command_runner.go:130] >       "binDir": "",
	I1213 10:31:05.831801  353396 command_runner.go:130] >       "binDirs": [
	I1213 10:31:05.831810  353396 command_runner.go:130] >         "/opt/cni/bin"
	I1213 10:31:05.831814  353396 command_runner.go:130] >       ],
	I1213 10:31:05.831818  353396 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1213 10:31:05.831821  353396 command_runner.go:130] >       "confTemplate": "",
	I1213 10:31:05.831825  353396 command_runner.go:130] >       "ipPref": "",
	I1213 10:31:05.831829  353396 command_runner.go:130] >       "maxConfNum": 1,
	I1213 10:31:05.831832  353396 command_runner.go:130] >       "setupSerially": false,
	I1213 10:31:05.831837  353396 command_runner.go:130] >       "useInternalLoopback": false
	I1213 10:31:05.831840  353396 command_runner.go:130] >     },
	I1213 10:31:05.831851  353396 command_runner.go:130] >     "containerd": {
	I1213 10:31:05.831859  353396 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1213 10:31:05.831864  353396 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1213 10:31:05.831869  353396 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1213 10:31:05.831872  353396 command_runner.go:130] >       "runtimes": {
	I1213 10:31:05.831875  353396 command_runner.go:130] >         "runc": {
	I1213 10:31:05.831879  353396 command_runner.go:130] >           "ContainerAnnotations": null,
	I1213 10:31:05.831884  353396 command_runner.go:130] >           "PodAnnotations": null,
	I1213 10:31:05.831891  353396 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1213 10:31:05.831895  353396 command_runner.go:130] >           "cgroupWritable": false,
	I1213 10:31:05.831899  353396 command_runner.go:130] >           "cniConfDir": "",
	I1213 10:31:05.831905  353396 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1213 10:31:05.831910  353396 command_runner.go:130] >           "io_type": "",
	I1213 10:31:05.831919  353396 command_runner.go:130] >           "options": {
	I1213 10:31:05.831924  353396 command_runner.go:130] >             "BinaryName": "",
	I1213 10:31:05.831929  353396 command_runner.go:130] >             "CriuImagePath": "",
	I1213 10:31:05.831936  353396 command_runner.go:130] >             "CriuWorkPath": "",
	I1213 10:31:05.831940  353396 command_runner.go:130] >             "IoGid": 0,
	I1213 10:31:05.831948  353396 command_runner.go:130] >             "IoUid": 0,
	I1213 10:31:05.831953  353396 command_runner.go:130] >             "NoNewKeyring": false,
	I1213 10:31:05.831961  353396 command_runner.go:130] >             "Root": "",
	I1213 10:31:05.831965  353396 command_runner.go:130] >             "ShimCgroup": "",
	I1213 10:31:05.831970  353396 command_runner.go:130] >             "SystemdCgroup": false
	I1213 10:31:05.831992  353396 command_runner.go:130] >           },
	I1213 10:31:05.831998  353396 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1213 10:31:05.832004  353396 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1213 10:31:05.832011  353396 command_runner.go:130] >           "runtimePath": "",
	I1213 10:31:05.832017  353396 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1213 10:31:05.832025  353396 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1213 10:31:05.832030  353396 command_runner.go:130] >           "snapshotter": ""
	I1213 10:31:05.832037  353396 command_runner.go:130] >         }
	I1213 10:31:05.832040  353396 command_runner.go:130] >       }
	I1213 10:31:05.832043  353396 command_runner.go:130] >     },
	I1213 10:31:05.832055  353396 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1213 10:31:05.832065  353396 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1213 10:31:05.832073  353396 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1213 10:31:05.832081  353396 command_runner.go:130] >     "disableApparmor": false,
	I1213 10:31:05.832086  353396 command_runner.go:130] >     "disableHugetlbController": true,
	I1213 10:31:05.832093  353396 command_runner.go:130] >     "disableProcMount": false,
	I1213 10:31:05.832098  353396 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1213 10:31:05.832106  353396 command_runner.go:130] >     "enableCDI": true,
	I1213 10:31:05.832110  353396 command_runner.go:130] >     "enableSelinux": false,
	I1213 10:31:05.832118  353396 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1213 10:31:05.832123  353396 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1213 10:31:05.832131  353396 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1213 10:31:05.832135  353396 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1213 10:31:05.832140  353396 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1213 10:31:05.832144  353396 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1213 10:31:05.832151  353396 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1213 10:31:05.832157  353396 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1213 10:31:05.832165  353396 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1213 10:31:05.832171  353396 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1213 10:31:05.832180  353396 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1213 10:31:05.832185  353396 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1213 10:31:05.832192  353396 command_runner.go:130] >   },
	I1213 10:31:05.832195  353396 command_runner.go:130] >   "features": {
	I1213 10:31:05.832204  353396 command_runner.go:130] >     "supplemental_groups_policy": true
	I1213 10:31:05.832208  353396 command_runner.go:130] >   },
	I1213 10:31:05.832212  353396 command_runner.go:130] >   "golang": "go1.24.9",
	I1213 10:31:05.832222  353396 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1213 10:31:05.832235  353396 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1213 10:31:05.832240  353396 command_runner.go:130] >   "runtimeHandlers": [
	I1213 10:31:05.832245  353396 command_runner.go:130] >     {
	I1213 10:31:05.832248  353396 command_runner.go:130] >       "features": {
	I1213 10:31:05.832257  353396 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1213 10:31:05.832262  353396 command_runner.go:130] >         "user_namespaces": true
	I1213 10:31:05.832268  353396 command_runner.go:130] >       }
	I1213 10:31:05.832276  353396 command_runner.go:130] >     },
	I1213 10:31:05.832283  353396 command_runner.go:130] >     {
	I1213 10:31:05.832287  353396 command_runner.go:130] >       "features": {
	I1213 10:31:05.832295  353396 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1213 10:31:05.832299  353396 command_runner.go:130] >         "user_namespaces": true
	I1213 10:31:05.832302  353396 command_runner.go:130] >       },
	I1213 10:31:05.832307  353396 command_runner.go:130] >       "name": "runc"
	I1213 10:31:05.832310  353396 command_runner.go:130] >     }
	I1213 10:31:05.832313  353396 command_runner.go:130] >   ],
	I1213 10:31:05.832316  353396 command_runner.go:130] >   "status": {
	I1213 10:31:05.832320  353396 command_runner.go:130] >     "conditions": [
	I1213 10:31:05.832325  353396 command_runner.go:130] >       {
	I1213 10:31:05.832330  353396 command_runner.go:130] >         "message": "",
	I1213 10:31:05.832337  353396 command_runner.go:130] >         "reason": "",
	I1213 10:31:05.832344  353396 command_runner.go:130] >         "status": true,
	I1213 10:31:05.832354  353396 command_runner.go:130] >         "type": "RuntimeReady"
	I1213 10:31:05.832362  353396 command_runner.go:130] >       },
	I1213 10:31:05.832365  353396 command_runner.go:130] >       {
	I1213 10:31:05.832375  353396 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1213 10:31:05.832380  353396 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1213 10:31:05.832383  353396 command_runner.go:130] >         "status": false,
	I1213 10:31:05.832388  353396 command_runner.go:130] >         "type": "NetworkReady"
	I1213 10:31:05.832396  353396 command_runner.go:130] >       },
	I1213 10:31:05.832399  353396 command_runner.go:130] >       {
	I1213 10:31:05.832422  353396 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1213 10:31:05.832434  353396 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1213 10:31:05.832444  353396 command_runner.go:130] >         "status": false,
	I1213 10:31:05.832451  353396 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1213 10:31:05.832454  353396 command_runner.go:130] >       }
	I1213 10:31:05.832457  353396 command_runner.go:130] >     ]
	I1213 10:31:05.832461  353396 command_runner.go:130] >   }
	I1213 10:31:05.832463  353396 command_runner.go:130] > }
	I1213 10:31:05.834983  353396 cni.go:84] Creating CNI manager for ""
	I1213 10:31:05.835008  353396 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:31:05.835032  353396 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:31:05.835055  353396 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-652709 NodeName:functional-652709 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:31:05.835177  353396 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-652709"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
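The rendered config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below (2237 bytes). As an aside, a file like this can be validated by hand with kubeadm's dry-run mode, which parses and exercises the config without mutating the node; this is an illustrative check, not a step the log shows minikube running:

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        // Requires kubeadm on PATH and root via sudo; --dry-run parses
        // and validates the config without initializing the node.
        cmd := exec.Command("sudo", "kubeadm", "init",
            "--config", "/var/tmp/minikube/kubeadm.yaml.new", "--dry-run")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }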
	I1213 10:31:05.835253  353396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:31:05.843333  353396 command_runner.go:130] > kubeadm
	I1213 10:31:05.843355  353396 command_runner.go:130] > kubectl
	I1213 10:31:05.843360  353396 command_runner.go:130] > kubelet
	I1213 10:31:05.843375  353396 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:31:05.843451  353396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:31:05.851169  353396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 10:31:05.865230  353396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:31:05.877883  353396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1213 10:31:05.891827  353396 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:31:05.896023  353396 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1213 10:31:05.896126  353396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:31:06.037110  353396 ssh_runner.go:195] Run: sudo systemctl start kubelet
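The daemon-reload/start pair above brings the kubelet up with the freshly copied unit files. If that start were to hang or fail, the usual follow-up (not performed in this run) would be:

    # Sketch: check kubelet state and its most recent log lines on the node.
    sudo systemctl status kubelet --no-pager
    sudo journalctl -u kubelet --no-pager -n 50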
	I1213 10:31:06.663693  353396 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709 for IP: 192.168.49.2
	I1213 10:31:06.663826  353396 certs.go:195] generating shared ca certs ...
	I1213 10:31:06.663858  353396 certs.go:227] acquiring lock for ca certs: {Name:mkca84a85b1b664dba08ef567e84d493256f825e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:31:06.664061  353396 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key
	I1213 10:31:06.664135  353396 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key
	I1213 10:31:06.664169  353396 certs.go:257] generating profile certs ...
	I1213 10:31:06.664331  353396 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.key
	I1213 10:31:06.664442  353396 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.key.86e7afd1
	I1213 10:31:06.664517  353396 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.key
	I1213 10:31:06.664552  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 10:31:06.664592  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 10:31:06.664634  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 10:31:06.664671  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 10:31:06.664701  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 10:31:06.664745  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 10:31:06.664781  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 10:31:06.664811  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 10:31:06.664893  353396 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem (1338 bytes)
	W1213 10:31:06.664965  353396 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915_empty.pem, impossibly tiny 0 bytes
	I1213 10:31:06.664999  353396 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:31:06.665056  353396 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:31:06.665113  353396 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:31:06.665174  353396 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem (1675 bytes)
	I1213 10:31:06.665258  353396 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 10:31:06.665367  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> /usr/share/ca-certificates/3089152.pem
	I1213 10:31:06.665414  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:31:06.665453  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem -> /usr/share/ca-certificates/308915.pem
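Each cert located above can be inspected in place; a hedged example against one of the listed paths:

    # Sketch: print subject, issuer and expiry for a cert found above.
    openssl x509 -noout -subject -issuer -enddate \
      -in /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem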
	I1213 10:31:06.666083  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:31:06.686373  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:31:06.706393  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:31:06.727893  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:31:06.748376  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:31:06.769115  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 10:31:06.788184  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:31:06.807317  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:31:06.826240  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /usr/share/ca-certificates/3089152.pem (1708 bytes)
	I1213 10:31:06.845063  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:31:06.863130  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem --> /usr/share/ca-certificates/308915.pem (1338 bytes)
	I1213 10:31:06.881577  353396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:31:06.894536  353396 ssh_runner.go:195] Run: openssl version
	I1213 10:31:06.900741  353396 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1213 10:31:06.901231  353396 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/308915.pem
	I1213 10:31:06.909107  353396 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/308915.pem /etc/ssl/certs/308915.pem
	I1213 10:31:06.916518  353396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/308915.pem
	I1213 10:31:06.920250  353396 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 10:22 /usr/share/ca-certificates/308915.pem
	I1213 10:31:06.920295  353396 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:22 /usr/share/ca-certificates/308915.pem
	I1213 10:31:06.920347  353396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308915.pem
	I1213 10:31:06.961321  353396 command_runner.go:130] > 51391683
	I1213 10:31:06.961405  353396 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:31:06.969200  353396 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3089152.pem
	I1213 10:31:06.976714  353396 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3089152.pem /etc/ssl/certs/3089152.pem
	I1213 10:31:06.984537  353396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3089152.pem
	I1213 10:31:06.988716  353396 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 10:22 /usr/share/ca-certificates/3089152.pem
	I1213 10:31:06.988763  353396 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:22 /usr/share/ca-certificates/3089152.pem
	I1213 10:31:06.988817  353396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3089152.pem
	I1213 10:31:07.029862  353396 command_runner.go:130] > 3ec20f2e
	I1213 10:31:07.030284  353396 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:31:07.037958  353396 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:31:07.045451  353396 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:31:07.053144  353396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:31:07.056994  353396 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:31:07.057051  353396 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:31:07.057104  353396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:31:07.097856  353396 command_runner.go:130] > b5213941
	I1213 10:31:07.098292  353396 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
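The three hash-and-symlink rounds above (51391683, 3ec20f2e, b5213941) follow OpenSSL's subject-hash convention: a CA is discoverable under /etc/ssl/certs only via a link named <subject-hash>.0. The same steps for an arbitrary cert, as a sketch:

    # Sketch: install a CA cert under its OpenSSL subject-hash name.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # b5213941 in the log above
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"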
	I1213 10:31:07.106039  353396 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:31:07.109917  353396 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:31:07.109945  353396 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 10:31:07.109953  353396 command_runner.go:130] > Device: 259,1	Inode: 3399222     Links: 1
	I1213 10:31:07.109960  353396 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 10:31:07.109966  353396 command_runner.go:130] > Access: 2025-12-13 10:26:59.103845116 +0000
	I1213 10:31:07.109971  353396 command_runner.go:130] > Modify: 2025-12-13 10:22:52.641441584 +0000
	I1213 10:31:07.109977  353396 command_runner.go:130] > Change: 2025-12-13 10:22:52.641441584 +0000
	I1213 10:31:07.109982  353396 command_runner.go:130] >  Birth: 2025-12-13 10:22:52.641441584 +0000
	I1213 10:31:07.110079  353396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:31:07.151277  353396 command_runner.go:130] > Certificate will not expire
	I1213 10:31:07.151699  353396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:31:07.192420  353396 command_runner.go:130] > Certificate will not expire
	I1213 10:31:07.192514  353396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:31:07.233686  353396 command_runner.go:130] > Certificate will not expire
	I1213 10:31:07.233923  353396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:31:07.275302  353396 command_runner.go:130] > Certificate will not expire
	I1213 10:31:07.275760  353396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:31:07.324799  353396 command_runner.go:130] > Certificate will not expire
	I1213 10:31:07.325290  353396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 10:31:07.377047  353396 command_runner.go:130] > Certificate will not expire
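Each -checkend 86400 call above asks openssl whether the cert expires within 86400 seconds (24 hours); it prints "Certificate will not expire" and exits 0 while the cert is still good. A loop form over the same directories:

    # Sketch: flag any minikube-managed cert expiring within 24 hours.
    for c in /var/lib/minikube/certs/*.crt /var/lib/minikube/certs/etcd/*.crt; do
      openssl x509 -noout -in "$c" -checkend 86400 >/dev/null \
        || echo "expiring soon: $c"
    done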
	I1213 10:31:07.377629  353396 kubeadm.go:401] StartCluster: {Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:31:07.377757  353396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 10:31:07.377843  353396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:31:07.405423  353396 cri.go:89] found id: ""
	I1213 10:31:07.405508  353396 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:31:07.414529  353396 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1213 10:31:07.414595  353396 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1213 10:31:07.414615  353396 command_runner.go:130] > /var/lib/minikube/etcd:
	I1213 10:31:07.415690  353396 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:31:07.415743  353396 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:31:07.415805  353396 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:31:07.423401  353396 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:31:07.423850  353396 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-652709" does not appear in /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:31:07.423998  353396 kubeconfig.go:62] /home/jenkins/minikube-integration/22127-307042/kubeconfig needs updating (will repair): [kubeconfig missing "functional-652709" cluster setting kubeconfig missing "functional-652709" context setting]
	I1213 10:31:07.424313  353396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/kubeconfig: {Name:mk9039d1d506122ff52224469c478e739c5ebabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
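kubeconfig.go repairs the file by re-adding the missing "functional-652709" cluster and context entries. Done by hand with kubectl (a sketch using names and paths from the surrounding lines, not what minikube itself executes):

    # Sketch: recreate the cluster/context entries the repair step writes.
    export KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
    kubectl config set-cluster functional-652709 \
      --server=https://192.168.49.2:8441 \
      --certificate-authority=/home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt
    kubectl config set-context functional-652709 \
      --cluster=functional-652709 --user=functional-652709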
	I1213 10:31:07.424829  353396 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:31:07.425032  353396 kapi.go:59] client config for functional-652709: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt", KeyFile:"/home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.key", CAFile:"/home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 10:31:07.425626  353396 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 10:31:07.425778  353396 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 10:31:07.425812  353396 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 10:31:07.425854  353396 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 10:31:07.425888  353396 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 10:31:07.425723  353396 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 10:31:07.426245  353396 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:31:07.437887  353396 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1213 10:31:07.437960  353396 kubeadm.go:602] duration metric: took 22.197398ms to restartPrimaryControlPlane
	I1213 10:31:07.437984  353396 kubeadm.go:403] duration metric: took 60.362619ms to StartCluster
	I1213 10:31:07.438027  353396 settings.go:142] acquiring lock: {Name:mk079e9a25ebbc2c8fbae42d4c6ed096a652c00b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:31:07.438107  353396 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:31:07.438874  353396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/kubeconfig: {Name:mk9039d1d506122ff52224469c478e739c5ebabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:31:07.439133  353396 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 10:31:07.439572  353396 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:31:07.439649  353396 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
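Of the addons in the map above, only default-storageclass and storage-provisioner are set to true. The same per-profile state can be inspected or toggled from the CLI; a sketch:

    # Sketch: list and toggle addons for this profile.
    minikube -p functional-652709 addons list
    minikube -p functional-652709 addons enable storage-provisioner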
	I1213 10:31:07.439895  353396 addons.go:70] Setting storage-provisioner=true in profile "functional-652709"
	I1213 10:31:07.439924  353396 addons.go:239] Setting addon storage-provisioner=true in "functional-652709"
	I1213 10:31:07.440086  353396 host.go:66] Checking if "functional-652709" exists ...
	I1213 10:31:07.439942  353396 addons.go:70] Setting default-storageclass=true in profile "functional-652709"
	I1213 10:31:07.440166  353396 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-652709"
	I1213 10:31:07.440530  353396 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
	I1213 10:31:07.440672  353396 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
	I1213 10:31:07.445924  353396 out.go:179] * Verifying Kubernetes components...
	I1213 10:31:07.449291  353396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:31:07.477163  353396 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:31:07.477818  353396 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:31:07.477982  353396 kapi.go:59] client config for functional-652709: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt", KeyFile:"/home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.key", CAFile:"/home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 10:31:07.478289  353396 addons.go:239] Setting addon default-storageclass=true in "functional-652709"
	I1213 10:31:07.478317  353396 host.go:66] Checking if "functional-652709" exists ...
	I1213 10:31:07.478815  353396 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
	I1213 10:31:07.480787  353396 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:07.480804  353396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:31:07.480857  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:07.506052  353396 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:07.506074  353396 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:31:07.506149  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:07.532221  353396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:31:07.553427  353396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
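Both ssh clients above reach the node through docker's published port 33125 with the per-machine key. Reproduced manually (the inspect format string is the one used in the log):

    # Sketch: find the mapped SSH port and open a shell on the node.
    PORT=$(docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      functional-652709)
    ssh -p "$PORT" \
      -i /home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa \
      docker@127.0.0.1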
	I1213 10:31:07.654835  353396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:31:07.677297  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:07.691553  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:08.413950  353396 node_ready.go:35] waiting up to 6m0s for node "functional-652709" to be "Ready" ...
	I1213 10:31:08.414025  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:08.414055  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:08.414088  353396 retry.go:31] will retry after 345.496875ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:08.414094  353396 type.go:168] "Request Body" body=""
	I1213 10:31:08.414127  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:08.414139  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:08.414145  353396 retry.go:31] will retry after 223.686843ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
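Every apply in this stretch fails the same way: kubectl cannot reach the apiserver on localhost:8441 to download the OpenAPI schema for client-side validation, so it exits before creating anything. The error text itself names one workaround (skip validation); waiting for the apiserver is the other. A sketch of both, with the in-node paths from the log:

    # Sketch: skip client-side validation entirely ...
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --validate=false \
      -f /etc/kubernetes/addons/storage-provisioner.yaml
    # ... or block until the apiserver answers before retrying the apply.
    until curl -ks https://localhost:8441/healthz | grep -q ok; do sleep 2; done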
	I1213 10:31:08.414166  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:08.414498  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:08.639014  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:08.708995  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:08.709048  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:08.709067  353396 retry.go:31] will retry after 375.63163ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:08.760277  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:08.818789  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:08.818835  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:08.818856  353396 retry.go:31] will retry after 406.416897ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:08.915066  353396 type.go:168] "Request Body" body=""
	I1213 10:31:08.915143  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:08.915484  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:09.084944  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:09.142294  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:09.145823  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:09.145856  353396 retry.go:31] will retry after 462.162588ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:09.226047  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:09.284957  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:09.285005  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:09.285029  353396 retry.go:31] will retry after 590.841892ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:09.414170  353396 type.go:168] "Request Body" body=""
	I1213 10:31:09.414270  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:09.414569  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:09.609047  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:09.669723  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:09.669808  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:09.669831  353396 retry.go:31] will retry after 579.936823ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:09.876057  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:09.914654  353396 type.go:168] "Request Body" body=""
	I1213 10:31:09.914781  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:09.915113  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:09.958653  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:09.959319  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:09.959356  353396 retry.go:31] will retry after 607.747477ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:10.250896  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:10.320327  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:10.320375  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:10.320395  353396 retry.go:31] will retry after 1.522220042s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:10.414670  353396 type.go:168] "Request Body" body=""
	I1213 10:31:10.414776  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:10.415078  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:10.415128  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
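The empty Response status="" lines around here are failed dials, not real HTTP responses: the poller cannot connect to 192.168.49.2:8441 at all. Once the apiserver is reachable, the same Ready condition can be read directly; a sketch:

    # Sketch: query the node's Ready condition the way node_ready.go does.
    kubectl --kubeconfig /home/jenkins/minikube-integration/22127-307042/kubeconfig \
      get node functional-652709 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'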
	I1213 10:31:10.567453  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:10.637133  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:10.637170  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:10.637192  353396 retry.go:31] will retry after 1.738217883s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:10.914619  353396 type.go:168] "Request Body" body=""
	I1213 10:31:10.914713  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:10.915040  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:11.414837  353396 type.go:168] "Request Body" body=""
	I1213 10:31:11.414916  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:11.415223  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:11.842893  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:11.907661  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:11.907696  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:11.907728  353396 retry.go:31] will retry after 2.533033731s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:11.915037  353396 type.go:168] "Request Body" body=""
	I1213 10:31:11.915117  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:11.915423  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:12.376116  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:12.414883  353396 type.go:168] "Request Body" body=""
	I1213 10:31:12.414962  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:12.415244  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:12.415286  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:12.436301  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:12.440043  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:12.440078  353396 retry.go:31] will retry after 2.549851387s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:12.914750  353396 type.go:168] "Request Body" body=""
	I1213 10:31:12.914826  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:12.915091  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:13.414886  353396 type.go:168] "Request Body" body=""
	I1213 10:31:13.414964  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:13.415325  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:13.914980  353396 type.go:168] "Request Body" body=""
	I1213 10:31:13.915058  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:13.915431  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:14.414144  353396 type.go:168] "Request Body" body=""
	I1213 10:31:14.414226  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:14.414516  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:14.441795  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:14.521460  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:14.521500  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:14.521521  353396 retry.go:31] will retry after 3.212514963s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:14.915209  353396 type.go:168] "Request Body" body=""
	I1213 10:31:14.915291  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:14.915586  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:14.915630  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:14.990917  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:15.080462  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:15.084181  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:15.084216  353396 retry.go:31] will retry after 3.733369975s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:15.414758  353396 type.go:168] "Request Body" body=""
	I1213 10:31:15.414836  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:15.415124  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:15.914893  353396 type.go:168] "Request Body" body=""
	I1213 10:31:15.914962  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:15.915239  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:16.415068  353396 type.go:168] "Request Body" body=""
	I1213 10:31:16.415147  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:16.415460  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:16.914139  353396 type.go:168] "Request Body" body=""
	I1213 10:31:16.914218  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:16.914520  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:17.414166  353396 type.go:168] "Request Body" body=""
	I1213 10:31:17.414237  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:17.414497  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:17.414542  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:17.734589  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:17.791638  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:17.795431  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:17.795464  353396 retry.go:31] will retry after 2.280639456s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:17.914828  353396 type.go:168] "Request Body" body=""
	I1213 10:31:17.914907  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:17.915229  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:18.415056  353396 type.go:168] "Request Body" body=""
	I1213 10:31:18.415138  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:18.415477  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:18.817969  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:18.882172  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:18.882215  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:18.882235  353396 retry.go:31] will retry after 4.138686797s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:18.914321  353396 type.go:168] "Request Body" body=""
	I1213 10:31:18.914392  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:18.914663  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:19.414265  353396 type.go:168] "Request Body" body=""
	I1213 10:31:19.414351  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:19.414671  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:19.414743  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:19.914452  353396 type.go:168] "Request Body" body=""
	I1213 10:31:19.914532  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:19.914885  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:20.077334  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:20.142139  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:20.142182  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:20.142203  353396 retry.go:31] will retry after 8.217804099s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
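A note on the recurring error text: `kubectl apply` fails while downloading the OpenAPI schema it uses for client-side validation, and the suggested `--validate=false` would not help here, because the dial to localhost:8441 is refused outright, i.e. no apiserver is listening at all. Before blaming validation, one could probe the apiserver's health endpoint directly; a minimal sketch, assuming the standard kube-apiserver `/readyz` path (TLS verification skipped since this is only a diagnostic):

```go
// probe_sketch.go: a minimal reachability check against the apiserver that
// the failing `kubectl apply` talks to. /readyz is the standard kube-apiserver
// health endpoint; TLS verification is skipped because this is a diagnostic probe.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   3 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://localhost:8441/readyz")
	if err != nil {
		// "connect: connection refused" here means nothing listens on 8441,
		// so --validate=false could not have rescued the apply.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver /readyz:", resp.Status)
}
```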
	I1213 10:31:20.414481  353396 type.go:168] "Request Body" body=""
	I1213 10:31:20.414554  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:20.414845  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:20.914228  353396 type.go:168] "Request Body" body=""
	I1213 10:31:20.914302  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:20.914590  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:21.414310  353396 type.go:168] "Request Body" body=""
	I1213 10:31:21.414387  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:21.414748  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:21.414804  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:21.914112  353396 type.go:168] "Request Body" body=""
	I1213 10:31:21.914192  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:21.914465  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:22.414190  353396 type.go:168] "Request Body" body=""
	I1213 10:31:22.414276  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:22.414625  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:22.914222  353396 type.go:168] "Request Body" body=""
	I1213 10:31:22.914304  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:22.914654  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:23.021940  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:23.082413  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:23.086273  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:23.086307  353396 retry.go:31] will retry after 3.228749017s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:23.414853  353396 type.go:168] "Request Body" body=""
	I1213 10:31:23.414928  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:23.415204  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:23.415248  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:23.915086  353396 type.go:168] "Request Body" body=""
	I1213 10:31:23.915169  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:23.915500  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:24.414244  353396 type.go:168] "Request Body" body=""
	I1213 10:31:24.414323  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:24.414750  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:24.914140  353396 type.go:168] "Request Body" body=""
	I1213 10:31:24.914235  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:24.914512  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:25.414276  353396 type.go:168] "Request Body" body=""
	I1213 10:31:25.414350  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:25.414719  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:25.914418  353396 type.go:168] "Request Body" body=""
	I1213 10:31:25.914503  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:25.914851  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:25.914921  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:26.315317  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:26.370308  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:26.374436  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:26.374468  353396 retry.go:31] will retry after 6.181513775s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:26.414616  353396 type.go:168] "Request Body" body=""
	I1213 10:31:26.414702  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:26.414956  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:26.914223  353396 type.go:168] "Request Body" body=""
	I1213 10:31:26.914299  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:26.914631  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:27.414210  353396 type.go:168] "Request Body" body=""
	I1213 10:31:27.414287  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:27.414626  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:27.914667  353396 type.go:168] "Request Body" body=""
	I1213 10:31:27.914756  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:27.915024  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:27.915076  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:28.360839  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:28.414331  353396 type.go:168] "Request Body" body=""
	I1213 10:31:28.414406  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:28.414626  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:28.418709  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:28.418758  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:28.418778  353396 retry.go:31] will retry after 9.214302946s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:28.914367  353396 type.go:168] "Request Body" body=""
	I1213 10:31:28.914492  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:28.914860  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:29.414102  353396 type.go:168] "Request Body" body=""
	I1213 10:31:29.414175  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:29.414432  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:29.914164  353396 type.go:168] "Request Body" body=""
	I1213 10:31:29.914249  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:29.914544  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:30.414171  353396 type.go:168] "Request Body" body=""
	I1213 10:31:30.414256  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:30.414595  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:30.414647  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:30.914147  353396 type.go:168] "Request Body" body=""
	I1213 10:31:30.914252  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:30.914572  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:31.414262  353396 type.go:168] "Request Body" body=""
	I1213 10:31:31.414347  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:31.414732  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:31.914303  353396 type.go:168] "Request Body" body=""
	I1213 10:31:31.914387  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:31.914757  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:32.415148  353396 type.go:168] "Request Body" body=""
	I1213 10:31:32.415224  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:32.415495  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:32.415554  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:32.557021  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:32.617384  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:32.617431  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:32.617463  353396 retry.go:31] will retry after 16.934984193s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:32.914304  353396 type.go:168] "Request Body" body=""
	I1213 10:31:32.914388  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:32.914742  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:33.414206  353396 type.go:168] "Request Body" body=""
	I1213 10:31:33.414289  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:33.414637  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:33.914139  353396 type.go:168] "Request Body" body=""
	I1213 10:31:33.914219  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:33.914504  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:34.414239  353396 type.go:168] "Request Body" body=""
	I1213 10:31:34.414324  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:34.414665  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:34.914262  353396 type.go:168] "Request Body" body=""
	I1213 10:31:34.914338  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:34.914682  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:34.914754  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:35.414981  353396 type.go:168] "Request Body" body=""
	I1213 10:31:35.415048  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:35.415330  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:35.915144  353396 type.go:168] "Request Body" body=""
	I1213 10:31:35.915224  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:35.915612  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:36.414208  353396 type.go:168] "Request Body" body=""
	I1213 10:31:36.414294  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:36.414625  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:36.914192  353396 type.go:168] "Request Body" body=""
	I1213 10:31:36.914277  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:36.914578  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:37.414237  353396 type.go:168] "Request Body" body=""
	I1213 10:31:37.414313  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:37.414629  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:37.414735  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:37.633334  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:37.695165  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:37.698650  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:37.698681  353396 retry.go:31] will retry after 9.333447966s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:37.915161  353396 type.go:168] "Request Body" body=""
	I1213 10:31:37.915240  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:37.915589  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:38.414123  353396 type.go:168] "Request Body" body=""
	I1213 10:31:38.414195  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:38.414520  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:38.914231  353396 type.go:168] "Request Body" body=""
	I1213 10:31:38.914310  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:38.914622  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:39.414370  353396 type.go:168] "Request Body" body=""
	I1213 10:31:39.414450  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:39.414771  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:39.414825  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:39.914151  353396 type.go:168] "Request Body" body=""
	I1213 10:31:39.914247  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:39.914597  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:40.414232  353396 type.go:168] "Request Body" body=""
	I1213 10:31:40.414305  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:40.414590  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:40.914274  353396 type.go:168] "Request Body" body=""
	I1213 10:31:40.914351  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:40.914714  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:41.414140  353396 type.go:168] "Request Body" body=""
	I1213 10:31:41.414213  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:41.414477  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:41.914180  353396 type.go:168] "Request Body" body=""
	I1213 10:31:41.914281  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:41.914609  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:41.914666  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:42.414204  353396 type.go:168] "Request Body" body=""
	I1213 10:31:42.414282  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:42.414600  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:42.914120  353396 type.go:168] "Request Body" body=""
	I1213 10:31:42.914194  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:42.914564  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:43.414289  353396 type.go:168] "Request Body" body=""
	I1213 10:31:43.414375  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:43.414737  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:43.914224  353396 type.go:168] "Request Body" body=""
	I1213 10:31:43.914304  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:43.914640  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:43.914712  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:44.414209  353396 type.go:168] "Request Body" body=""
	I1213 10:31:44.414295  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:44.414641  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:44.914233  353396 type.go:168] "Request Body" body=""
	I1213 10:31:44.914306  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:44.914657  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:45.414435  353396 type.go:168] "Request Body" body=""
	I1213 10:31:45.414551  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:45.414971  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:45.914734  353396 type.go:168] "Request Body" body=""
	I1213 10:31:45.914804  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:45.915154  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:45.915214  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:46.414939  353396 type.go:168] "Request Body" body=""
	I1213 10:31:46.415012  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:46.415313  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:46.915102  353396 type.go:168] "Request Body" body=""
	I1213 10:31:46.915186  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:46.915495  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:47.032831  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:47.089360  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:47.092850  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:47.092882  353396 retry.go:31] will retry after 14.257705184s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:47.414212  353396 type.go:168] "Request Body" body=""
	I1213 10:31:47.414287  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:47.414544  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:47.914676  353396 type.go:168] "Request Body" body=""
	I1213 10:31:47.914771  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:47.915126  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:48.414973  353396 type.go:168] "Request Body" body=""
	I1213 10:31:48.415048  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:48.415397  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:48.415453  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:48.914935  353396 type.go:168] "Request Body" body=""
	I1213 10:31:48.915016  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:48.915282  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:49.415024  353396 type.go:168] "Request Body" body=""
	I1213 10:31:49.415102  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:49.415400  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:49.552673  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:49.614333  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:49.614392  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:49.614413  353396 retry.go:31] will retry after 23.024485713s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
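By this point the addon retries have stretched from 3.2s to 23s while the node poll keeps ticking every 500 ms; both loops are implicitly waiting on one precondition, namely something accepting TCP connections on 192.168.49.2:8441 again. A hypothetical helper that waits for exactly that condition, not part of minikube:

```go
// port_wait_sketch.go: waits for a TCP listener to appear on the apiserver
// address from the log. A hypothetical diagnostic helper, not minikube code.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForPort(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, time.Second)
		if err == nil {
			conn.Close()
			return nil // something is listening again
		}
		time.Sleep(500 * time.Millisecond) // same cadence as the poll loop above
	}
	return fmt.Errorf("nothing listening on %s after %v", addr, timeout)
}

func main() {
	fmt.Println(waitForPort("192.168.49.2:8441", 2*time.Minute))
}
```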
	I1213 10:31:49.914879  353396 type.go:168] "Request Body" body=""
	I1213 10:31:49.914950  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:49.915276  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:50.415038  353396 type.go:168] "Request Body" body=""
	I1213 10:31:50.415112  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:50.415429  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:50.415489  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:50.914923  353396 type.go:168] "Request Body" body=""
	I1213 10:31:50.915005  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:50.915323  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:51.414987  353396 type.go:168] "Request Body" body=""
	I1213 10:31:51.415064  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:51.415444  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:51.915111  353396 type.go:168] "Request Body" body=""
	I1213 10:31:51.915192  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:51.915480  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:52.414202  353396 type.go:168] "Request Body" body=""
	I1213 10:31:52.414285  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:52.414620  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:52.914489  353396 type.go:168] "Request Body" body=""
	I1213 10:31:52.914562  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:52.914926  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:52.914988  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:53.414747  353396 type.go:168] "Request Body" body=""
	I1213 10:31:53.414820  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:53.415090  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:53.914866  353396 type.go:168] "Request Body" body=""
	I1213 10:31:53.914939  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:53.915273  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:54.415083  353396 type.go:168] "Request Body" body=""
	I1213 10:31:54.415160  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:54.415481  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:54.914141  353396 type.go:168] "Request Body" body=""
	I1213 10:31:54.914222  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:54.914536  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:55.414218  353396 type.go:168] "Request Body" body=""
	I1213 10:31:55.414293  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:55.414637  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:55.414730  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:55.914444  353396 type.go:168] "Request Body" body=""
	I1213 10:31:55.914529  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:55.914897  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:56.414701  353396 type.go:168] "Request Body" body=""
	I1213 10:31:56.414795  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:56.415073  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:56.914860  353396 type.go:168] "Request Body" body=""
	I1213 10:31:56.914937  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:56.915228  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:57.415017  353396 type.go:168] "Request Body" body=""
	I1213 10:31:57.415092  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:57.415406  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:57.415455  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:57.914498  353396 type.go:168] "Request Body" body=""
	I1213 10:31:57.914564  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:57.914847  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:58.414239  353396 type.go:168] "Request Body" body=""
	I1213 10:31:58.414333  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:58.414679  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:58.914256  353396 type.go:168] "Request Body" body=""
	I1213 10:31:58.914332  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:58.914624  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:59.414297  353396 type.go:168] "Request Body" body=""
	I1213 10:31:59.414370  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:59.414637  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:59.914203  353396 type.go:168] "Request Body" body=""
	I1213 10:31:59.914281  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:59.914613  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:59.914668  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:00.414938  353396 type.go:168] "Request Body" body=""
	I1213 10:32:00.415045  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:00.415391  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:00.914138  353396 type.go:168] "Request Body" body=""
	I1213 10:32:00.914218  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:00.914514  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:01.350855  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:32:01.414382  353396 type.go:168] "Request Body" body=""
	I1213 10:32:01.414452  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:01.414751  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:01.421471  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:32:01.421509  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:32:01.421528  353396 retry.go:31] will retry after 32.770422349s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
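retry.go:31 above schedules the next apply attempt after a randomized delay (32.770422349s here). A minimal sketch of that retry-after-jittered-delay pattern, using an illustrative retryAfterJitter helper rather than minikube's actual retry API:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryAfterJitter keeps calling fn until it succeeds or attempts run out,
// sleeping a growing, randomized interval between tries, similar in spirit
// to the "will retry after ..." lines in this log.
func retryAfterJitter(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Exponential growth with up to 50% random jitter.
		d := base << uint(i)
		d += time.Duration(rand.Int63n(int64(d)/2 + 1))
		fmt.Printf("will retry after %s: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	calls := 0
	err := retryAfterJitter(5, time.Second, func() error {
		calls++
		if calls < 3 {
			return errors.New("dial tcp [::1]:8441: connect: connection refused")
		}
		return nil
	})
	fmt.Println("final result:", err)
}

Here the randomized spacing simply keeps repeated attempts from hammering an endpoint in lockstep; in this run the apiserver never came back, so every scheduled retry failed the same way.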
	I1213 10:32:01.914172  353396 type.go:168] "Request Body" body=""
	I1213 10:32:01.914251  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:01.914603  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:02.414231  353396 type.go:168] "Request Body" body=""
	I1213 10:32:02.414337  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:02.414661  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:02.414753  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:02.914852  353396 type.go:168] "Request Body" body=""
	I1213 10:32:02.914942  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:02.915291  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:03.415068  353396 type.go:168] "Request Body" body=""
	I1213 10:32:03.415140  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:03.415560  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:03.914265  353396 type.go:168] "Request Body" body=""
	I1213 10:32:03.914365  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:03.914734  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:04.414487  353396 type.go:168] "Request Body" body=""
	I1213 10:32:04.414564  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:04.414920  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:04.414976  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:04.914751  353396 type.go:168] "Request Body" body=""
	I1213 10:32:04.914822  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:04.915267  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:05.415063  353396 type.go:168] "Request Body" body=""
	I1213 10:32:05.415138  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:05.415446  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:05.914158  353396 type.go:168] "Request Body" body=""
	I1213 10:32:05.914237  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:05.914537  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:06.414151  353396 type.go:168] "Request Body" body=""
	I1213 10:32:06.414241  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:06.414588  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:06.914189  353396 type.go:168] "Request Body" body=""
	I1213 10:32:06.914264  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:06.914626  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:06.914721  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:07.414237  353396 type.go:168] "Request Body" body=""
	I1213 10:32:07.414336  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:07.414675  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:07.914726  353396 type.go:168] "Request Body" body=""
	I1213 10:32:07.914801  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:07.915094  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:08.414945  353396 type.go:168] "Request Body" body=""
	I1213 10:32:08.415038  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:08.415395  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:08.914139  353396 type.go:168] "Request Body" body=""
	I1213 10:32:08.914221  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:08.914527  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:09.414118  353396 type.go:168] "Request Body" body=""
	I1213 10:32:09.414186  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:09.414531  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:09.414607  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:09.914205  353396 type.go:168] "Request Body" body=""
	I1213 10:32:09.914276  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:09.914632  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:10.414215  353396 type.go:168] "Request Body" body=""
	I1213 10:32:10.414292  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:10.414629  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:10.914203  353396 type.go:168] "Request Body" body=""
	I1213 10:32:10.914292  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:10.914593  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:11.414242  353396 type.go:168] "Request Body" body=""
	I1213 10:32:11.414348  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:11.414703  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:11.414757  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:11.914433  353396 type.go:168] "Request Body" body=""
	I1213 10:32:11.914511  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:11.914889  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:12.414571  353396 type.go:168] "Request Body" body=""
	I1213 10:32:12.414678  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:12.414978  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:12.639532  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:32:12.701723  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:32:12.701768  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:32:12.701788  353396 retry.go:31] will retry after 24.373252759s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:32:12.915117  353396 type.go:168] "Request Body" body=""
	I1213 10:32:12.915211  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:12.915511  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:13.414252  353396 type.go:168] "Request Body" body=""
	I1213 10:32:13.414325  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:13.414721  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:13.414794  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:13.914428  353396 type.go:168] "Request Body" body=""
	I1213 10:32:13.914518  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:13.914913  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:14.414265  353396 type.go:168] "Request Body" body=""
	I1213 10:32:14.414377  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:14.414786  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:14.914281  353396 type.go:168] "Request Body" body=""
	I1213 10:32:14.914360  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:14.914710  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:15.414252  353396 type.go:168] "Request Body" body=""
	I1213 10:32:15.414344  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:15.414630  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:15.914243  353396 type.go:168] "Request Body" body=""
	I1213 10:32:15.914331  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:15.914660  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:15.914750  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:16.414450  353396 type.go:168] "Request Body" body=""
	I1213 10:32:16.414531  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:16.414846  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:16.914165  353396 type.go:168] "Request Body" body=""
	I1213 10:32:16.914233  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:16.914541  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:17.414213  353396 type.go:168] "Request Body" body=""
	I1213 10:32:17.414341  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:17.414625  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:17.914712  353396 type.go:168] "Request Body" body=""
	I1213 10:32:17.914803  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:17.915126  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:17.915184  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:18.414920  353396 type.go:168] "Request Body" body=""
	I1213 10:32:18.415009  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:18.415286  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:18.915162  353396 type.go:168] "Request Body" body=""
	I1213 10:32:18.915251  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:18.915598  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:19.414275  353396 type.go:168] "Request Body" body=""
	I1213 10:32:19.414357  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:19.414661  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:19.914425  353396 type.go:168] "Request Body" body=""
	I1213 10:32:19.914592  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:19.914937  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:20.414768  353396 type.go:168] "Request Body" body=""
	I1213 10:32:20.414852  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:20.415220  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:20.415278  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:20.915055  353396 type.go:168] "Request Body" body=""
	I1213 10:32:20.915156  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:20.915495  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:21.414184  353396 type.go:168] "Request Body" body=""
	I1213 10:32:21.414260  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:21.414555  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:21.914250  353396 type.go:168] "Request Body" body=""
	I1213 10:32:21.914326  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:21.914677  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:22.414287  353396 type.go:168] "Request Body" body=""
	I1213 10:32:22.414370  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:22.414741  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:22.914735  353396 type.go:168] "Request Body" body=""
	I1213 10:32:22.914804  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:22.915060  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:22.915107  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:23.414877  353396 type.go:168] "Request Body" body=""
	I1213 10:32:23.414953  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:23.415252  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:23.915036  353396 type.go:168] "Request Body" body=""
	I1213 10:32:23.915115  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:23.915451  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:24.415135  353396 type.go:168] "Request Body" body=""
	I1213 10:32:24.415211  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:24.415473  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:24.914198  353396 type.go:168] "Request Body" body=""
	I1213 10:32:24.914282  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:24.914640  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:25.414436  353396 type.go:168] "Request Body" body=""
	I1213 10:32:25.414514  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:25.414854  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:25.414914  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:25.914152  353396 type.go:168] "Request Body" body=""
	I1213 10:32:25.914219  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:25.914483  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:26.414214  353396 type.go:168] "Request Body" body=""
	I1213 10:32:26.414314  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:26.414636  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:26.914231  353396 type.go:168] "Request Body" body=""
	I1213 10:32:26.914307  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:26.914637  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:27.414336  353396 type.go:168] "Request Body" body=""
	I1213 10:32:27.414402  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:27.414666  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:27.914790  353396 type.go:168] "Request Body" body=""
	I1213 10:32:27.914883  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:27.915207  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:27.915256  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:28.414990  353396 type.go:168] "Request Body" body=""
	I1213 10:32:28.415074  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:28.415436  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:28.915099  353396 type.go:168] "Request Body" body=""
	I1213 10:32:28.915173  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:28.915437  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:29.414163  353396 type.go:168] "Request Body" body=""
	I1213 10:32:29.414250  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:29.414561  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:29.914302  353396 type.go:168] "Request Body" body=""
	I1213 10:32:29.914399  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:29.914733  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:30.414157  353396 type.go:168] "Request Body" body=""
	I1213 10:32:30.414241  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:30.414552  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:30.414604  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:30.914233  353396 type.go:168] "Request Body" body=""
	I1213 10:32:30.914307  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:30.914656  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:31.414263  353396 type.go:168] "Request Body" body=""
	I1213 10:32:31.414357  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:31.414708  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:31.914203  353396 type.go:168] "Request Body" body=""
	I1213 10:32:31.914273  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:31.914531  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:32.414222  353396 type.go:168] "Request Body" body=""
	I1213 10:32:32.414304  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:32.414640  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:32.414727  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:32.914510  353396 type.go:168] "Request Body" body=""
	I1213 10:32:32.914599  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:32.914973  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:33.414825  353396 type.go:168] "Request Body" body=""
	I1213 10:32:33.414915  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:33.415280  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:33.915101  353396 type.go:168] "Request Body" body=""
	I1213 10:32:33.915178  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:33.915518  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:34.192937  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:32:34.265284  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:32:34.265320  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:32:34.265405  353396 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
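Each apply attempt in this log is shelled out through ssh_runner.go:195 as "sudo KUBECONFIG=... kubectl apply --force -f <manifest>". A simplified os/exec sketch of that invocation, with paths and flags copied from the log lines above and the SSH transport omitted. Note that kubectl's own error text suggests --validate=false, which would skip the failing client-side openapi fetch but would not help here, since the apiserver itself refuses connections:

package main

import (
	"fmt"
	"os/exec"
)

// applyManifest mirrors the command in the ssh_runner lines above.
// sudo accepts the VAR=value assignment placed before the program name.
func applyManifest(manifest string) error {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"apply", "--force", "-f", manifest,
		// "--validate=false", // would skip the client-side openapi fetch that fails here
	)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("%w\n%s", err, out)
	}
	return nil
}

func main() {
	if err := applyManifest("/etc/kubernetes/addons/storageclass.yaml"); err != nil {
		fmt.Println("apply failed, will retry:", err)
	}
}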
	I1213 10:32:34.414970  353396 type.go:168] "Request Body" body=""
	I1213 10:32:34.415052  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:34.415423  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:34.415491  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:34.914214  353396 type.go:168] "Request Body" body=""
	I1213 10:32:34.914301  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:34.914655  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:35.414268  353396 type.go:168] "Request Body" body=""
	I1213 10:32:35.414356  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:35.414678  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:35.914239  353396 type.go:168] "Request Body" body=""
	I1213 10:32:35.914322  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:35.914704  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:36.414407  353396 type.go:168] "Request Body" body=""
	I1213 10:32:36.414485  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:36.414823  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:36.914200  353396 type.go:168] "Request Body" body=""
	I1213 10:32:36.914292  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:36.914625  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:36.914719  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:37.076016  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:32:37.141132  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:32:37.141183  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:32:37.141286  353396 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 10:32:37.146231  353396 out.go:179] * Enabled addons: 
	I1213 10:32:37.149102  353396 addons.go:530] duration metric: took 1m29.709445532s for enable addons: enabled=[]
	I1213 10:32:37.414592  353396 type.go:168] "Request Body" body=""
	I1213 10:32:37.414736  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:37.415128  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:37.914163  353396 type.go:168] "Request Body" body=""
	I1213 10:32:37.914246  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:37.914580  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:38.414336  353396 type.go:168] "Request Body" body=""
	I1213 10:32:38.414415  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:38.414780  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:38.914239  353396 type.go:168] "Request Body" body=""
	I1213 10:32:38.914317  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:38.914675  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:38.914752  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:39.414390  353396 type.go:168] "Request Body" body=""
	I1213 10:32:39.414462  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:39.414811  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same request/response cycle repeats every ~500ms from 10:32:39.914 through 10:33:41.414: a GET to https://192.168.49.2:8441/api/v1/nodes/functional-652709 with the Accept/User-Agent headers shown above, each answered by an empty response (status="" headers="" milliseconds=0); every fifth attempt (~2.5s) node_ready.go:55 logs the same "will retry" warning that the Get failed with "dial tcp 192.168.49.2:8441: connect: connection refused" ...]
	I1213 10:33:41.414225  353396 type.go:168] "Request Body" body=""
	I1213 10:33:41.414301  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:41.414638  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:41.414716  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:41.914232  353396 type.go:168] "Request Body" body=""
	I1213 10:33:41.914312  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:41.914726  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:42.414413  353396 type.go:168] "Request Body" body=""
	I1213 10:33:42.414502  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:42.414788  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:42.914738  353396 type.go:168] "Request Body" body=""
	I1213 10:33:42.914819  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:42.915151  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:43.414956  353396 type.go:168] "Request Body" body=""
	I1213 10:33:43.415050  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:43.415390  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:43.415447  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:43.914096  353396 type.go:168] "Request Body" body=""
	I1213 10:33:43.914175  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:43.914452  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:44.414189  353396 type.go:168] "Request Body" body=""
	I1213 10:33:44.414299  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:44.414625  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:44.914233  353396 type.go:168] "Request Body" body=""
	I1213 10:33:44.914313  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:44.914681  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:45.414148  353396 type.go:168] "Request Body" body=""
	I1213 10:33:45.414250  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:45.414576  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:45.914408  353396 type.go:168] "Request Body" body=""
	I1213 10:33:45.914483  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:45.914847  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:45.914902  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:46.414598  353396 type.go:168] "Request Body" body=""
	I1213 10:33:46.414675  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:46.415085  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:46.914922  353396 type.go:168] "Request Body" body=""
	I1213 10:33:46.915000  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:46.915300  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:47.414163  353396 type.go:168] "Request Body" body=""
	I1213 10:33:47.414249  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:47.414598  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:47.914753  353396 type.go:168] "Request Body" body=""
	I1213 10:33:47.914829  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:47.915132  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:47.915181  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:48.414845  353396 type.go:168] "Request Body" body=""
	I1213 10:33:48.414950  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:48.415268  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:48.914972  353396 type.go:168] "Request Body" body=""
	I1213 10:33:48.915042  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:48.915396  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:49.415067  353396 type.go:168] "Request Body" body=""
	I1213 10:33:49.415147  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:49.415484  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:49.914172  353396 type.go:168] "Request Body" body=""
	I1213 10:33:49.914244  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:49.914579  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:50.414275  353396 type.go:168] "Request Body" body=""
	I1213 10:33:50.414359  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:50.414679  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:50.414752  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:50.914234  353396 type.go:168] "Request Body" body=""
	I1213 10:33:50.914312  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:50.914673  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:51.414220  353396 type.go:168] "Request Body" body=""
	I1213 10:33:51.414286  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:51.414593  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:51.914230  353396 type.go:168] "Request Body" body=""
	I1213 10:33:51.914311  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:51.914660  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:52.414409  353396 type.go:168] "Request Body" body=""
	I1213 10:33:52.414499  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:52.414831  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:52.414892  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:52.914704  353396 type.go:168] "Request Body" body=""
	I1213 10:33:52.914782  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:52.915049  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:53.414824  353396 type.go:168] "Request Body" body=""
	I1213 10:33:53.414900  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:53.415223  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:53.915049  353396 type.go:168] "Request Body" body=""
	I1213 10:33:53.915127  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:53.915475  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:54.415020  353396 type.go:168] "Request Body" body=""
	I1213 10:33:54.415131  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:54.415393  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:54.415434  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:54.914119  353396 type.go:168] "Request Body" body=""
	I1213 10:33:54.914214  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:54.914516  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:55.414234  353396 type.go:168] "Request Body" body=""
	I1213 10:33:55.414310  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:55.414632  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:55.914184  353396 type.go:168] "Request Body" body=""
	I1213 10:33:55.914266  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:55.914529  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:56.414289  353396 type.go:168] "Request Body" body=""
	I1213 10:33:56.414370  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:56.414757  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:56.914479  353396 type.go:168] "Request Body" body=""
	I1213 10:33:56.914560  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:56.914914  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:56.914974  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:57.414182  353396 type.go:168] "Request Body" body=""
	I1213 10:33:57.414256  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:57.414574  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:57.914733  353396 type.go:168] "Request Body" body=""
	I1213 10:33:57.914817  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:57.915173  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:58.414963  353396 type.go:168] "Request Body" body=""
	I1213 10:33:58.415038  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:58.415384  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:58.915098  353396 type.go:168] "Request Body" body=""
	I1213 10:33:58.915166  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:58.915457  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:58.915498  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:59.414217  353396 type.go:168] "Request Body" body=""
	I1213 10:33:59.414293  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:59.414619  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:59.914358  353396 type.go:168] "Request Body" body=""
	I1213 10:33:59.914442  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:59.914849  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:00.414213  353396 type.go:168] "Request Body" body=""
	I1213 10:34:00.414339  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:00.414709  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:00.914229  353396 type.go:168] "Request Body" body=""
	I1213 10:34:00.914306  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:00.914634  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:01.414239  353396 type.go:168] "Request Body" body=""
	I1213 10:34:01.414315  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:01.414624  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:01.414672  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:01.914168  353396 type.go:168] "Request Body" body=""
	I1213 10:34:01.914244  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:01.914585  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:02.414240  353396 type.go:168] "Request Body" body=""
	I1213 10:34:02.414320  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:02.414671  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:02.914495  353396 type.go:168] "Request Body" body=""
	I1213 10:34:02.914572  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:02.914905  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:03.414563  353396 type.go:168] "Request Body" body=""
	I1213 10:34:03.414642  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:03.414937  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:03.414981  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:03.914802  353396 type.go:168] "Request Body" body=""
	I1213 10:34:03.914886  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:03.915200  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:04.415061  353396 type.go:168] "Request Body" body=""
	I1213 10:34:04.415173  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:04.415604  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:04.915045  353396 type.go:168] "Request Body" body=""
	I1213 10:34:04.915117  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:04.915454  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:05.414181  353396 type.go:168] "Request Body" body=""
	I1213 10:34:05.414260  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:05.414598  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:05.914312  353396 type.go:168] "Request Body" body=""
	I1213 10:34:05.914397  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:05.914761  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:05.914818  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:06.414172  353396 type.go:168] "Request Body" body=""
	I1213 10:34:06.414246  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:06.414578  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:06.914215  353396 type.go:168] "Request Body" body=""
	I1213 10:34:06.914294  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:06.914638  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:07.414373  353396 type.go:168] "Request Body" body=""
	I1213 10:34:07.414449  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:07.414801  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:07.914926  353396 type.go:168] "Request Body" body=""
	I1213 10:34:07.914993  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:07.915307  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:07.915360  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:08.415127  353396 type.go:168] "Request Body" body=""
	I1213 10:34:08.415205  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:08.415596  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:08.914374  353396 type.go:168] "Request Body" body=""
	I1213 10:34:08.914456  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:08.914801  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:09.414148  353396 type.go:168] "Request Body" body=""
	I1213 10:34:09.414219  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:09.414479  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:09.914227  353396 type.go:168] "Request Body" body=""
	I1213 10:34:09.914306  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:09.914661  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:10.414240  353396 type.go:168] "Request Body" body=""
	I1213 10:34:10.414319  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:10.414680  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:10.414778  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:10.918812  353396 type.go:168] "Request Body" body=""
	I1213 10:34:10.918890  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:10.919160  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:11.415030  353396 type.go:168] "Request Body" body=""
	I1213 10:34:11.415107  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:11.415436  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:11.914150  353396 type.go:168] "Request Body" body=""
	I1213 10:34:11.914232  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:11.914571  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:12.415071  353396 type.go:168] "Request Body" body=""
	I1213 10:34:12.415146  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:12.415421  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:12.415479  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:12.914213  353396 type.go:168] "Request Body" body=""
	I1213 10:34:12.914288  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:12.914622  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:13.414338  353396 type.go:168] "Request Body" body=""
	I1213 10:34:13.414421  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:13.414784  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:13.914194  353396 type.go:168] "Request Body" body=""
	I1213 10:34:13.914270  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:13.914538  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:14.414217  353396 type.go:168] "Request Body" body=""
	I1213 10:34:14.414294  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:14.414624  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:14.914210  353396 type.go:168] "Request Body" body=""
	I1213 10:34:14.914290  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:14.914590  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:14.914639  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:15.414121  353396 type.go:168] "Request Body" body=""
	I1213 10:34:15.414260  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:15.414569  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:15.914203  353396 type.go:168] "Request Body" body=""
	I1213 10:34:15.914284  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:15.914613  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:16.414225  353396 type.go:168] "Request Body" body=""
	I1213 10:34:16.414308  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:16.414648  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:16.914359  353396 type.go:168] "Request Body" body=""
	I1213 10:34:16.914447  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:16.914753  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:16.914798  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:17.414239  353396 type.go:168] "Request Body" body=""
	I1213 10:34:17.414312  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:17.414646  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:17.914569  353396 type.go:168] "Request Body" body=""
	I1213 10:34:17.914646  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:17.914997  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:18.414794  353396 type.go:168] "Request Body" body=""
	I1213 10:34:18.414864  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:18.415130  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:18.914878  353396 type.go:168] "Request Body" body=""
	I1213 10:34:18.914956  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:18.915256  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:18.915309  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:19.415048  353396 type.go:168] "Request Body" body=""
	I1213 10:34:19.415124  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:19.415473  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:19.914155  353396 type.go:168] "Request Body" body=""
	I1213 10:34:19.914239  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:19.914557  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:20.414216  353396 type.go:168] "Request Body" body=""
	I1213 10:34:20.414293  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:20.414595  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:20.914298  353396 type.go:168] "Request Body" body=""
	I1213 10:34:20.914378  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:20.914742  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:21.414175  353396 type.go:168] "Request Body" body=""
	I1213 10:34:21.414247  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:21.414574  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:21.414628  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:21.914278  353396 type.go:168] "Request Body" body=""
	I1213 10:34:21.914361  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:21.914745  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:22.414284  353396 type.go:168] "Request Body" body=""
	I1213 10:34:22.414361  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:22.414747  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:22.914549  353396 type.go:168] "Request Body" body=""
	I1213 10:34:22.914626  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:22.914988  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:23.414779  353396 type.go:168] "Request Body" body=""
	I1213 10:34:23.414855  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:23.415214  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:23.415277  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:23.915088  353396 type.go:168] "Request Body" body=""
	I1213 10:34:23.915170  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:23.915507  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:24.414168  353396 type.go:168] "Request Body" body=""
	I1213 10:34:24.414241  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:24.414497  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:24.914176  353396 type.go:168] "Request Body" body=""
	I1213 10:34:24.914250  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:24.914580  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:25.414317  353396 type.go:168] "Request Body" body=""
	I1213 10:34:25.414397  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:25.414758  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:25.914443  353396 type.go:168] "Request Body" body=""
	I1213 10:34:25.914516  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:25.914878  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:25.914936  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:26.414193  353396 type.go:168] "Request Body" body=""
	I1213 10:34:26.414269  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:26.414575  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:26.914218  353396 type.go:168] "Request Body" body=""
	I1213 10:34:26.914293  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:26.914611  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:27.414157  353396 type.go:168] "Request Body" body=""
	I1213 10:34:27.414224  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:27.414475  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:27.914651  353396 type.go:168] "Request Body" body=""
	I1213 10:34:27.914747  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:27.915082  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:27.915143  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:28.414747  353396 type.go:168] "Request Body" body=""
	I1213 10:34:28.414831  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:28.415166  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:28.914918  353396 type.go:168] "Request Body" body=""
	I1213 10:34:28.914994  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:28.915317  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:29.415099  353396 type.go:168] "Request Body" body=""
	I1213 10:34:29.415182  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:29.415527  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:29.914143  353396 type.go:168] "Request Body" body=""
	I1213 10:34:29.914235  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:29.914632  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:30.414347  353396 type.go:168] "Request Body" body=""
	I1213 10:34:30.414415  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:30.414708  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:30.414755  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	[polling continues: GET https://192.168.49.2:8441/api/v1/nodes/functional-652709 repeated every ~500ms from 10:34:30.9 through 10:35:32.4, every attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; node_ready.go:55 logged the will-retry warning roughly every 2.5s]
	I1213 10:35:32.914353  353396 type.go:168] "Request Body" body=""
	I1213 10:35:32.914437  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:32.914779  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:32.914844  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:33.415110  353396 type.go:168] "Request Body" body=""
	I1213 10:35:33.415191  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:33.415482  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:33.914210  353396 type.go:168] "Request Body" body=""
	I1213 10:35:33.914292  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:33.914627  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:34.414260  353396 type.go:168] "Request Body" body=""
	I1213 10:35:34.414342  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:34.414742  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:34.914167  353396 type.go:168] "Request Body" body=""
	I1213 10:35:34.914242  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:34.914556  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:35.414226  353396 type.go:168] "Request Body" body=""
	I1213 10:35:35.414313  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:35.414666  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:35.414752  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:35.914420  353396 type.go:168] "Request Body" body=""
	I1213 10:35:35.914498  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:35.914834  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:36.414154  353396 type.go:168] "Request Body" body=""
	I1213 10:35:36.414226  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:36.414499  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:36.914218  353396 type.go:168] "Request Body" body=""
	I1213 10:35:36.914304  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:36.914676  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:37.414405  353396 type.go:168] "Request Body" body=""
	I1213 10:35:37.414482  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:37.414832  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:37.414887  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:37.914713  353396 type.go:168] "Request Body" body=""
	I1213 10:35:37.914786  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:37.915049  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:38.414865  353396 type.go:168] "Request Body" body=""
	I1213 10:35:38.414946  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:38.415313  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:38.915124  353396 type.go:168] "Request Body" body=""
	I1213 10:35:38.915206  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:38.915515  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:39.414199  353396 type.go:168] "Request Body" body=""
	I1213 10:35:39.414277  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:39.414637  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:39.914230  353396 type.go:168] "Request Body" body=""
	I1213 10:35:39.914312  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:39.914640  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:39.914716  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:40.414236  353396 type.go:168] "Request Body" body=""
	I1213 10:35:40.414320  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:40.414647  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:40.914170  353396 type.go:168] "Request Body" body=""
	I1213 10:35:40.914247  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:40.914571  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:41.414273  353396 type.go:168] "Request Body" body=""
	I1213 10:35:41.414349  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:41.414716  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:41.914438  353396 type.go:168] "Request Body" body=""
	I1213 10:35:41.914515  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:41.914837  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:41.914886  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:42.414105  353396 type.go:168] "Request Body" body=""
	I1213 10:35:42.414188  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:42.414457  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:42.914545  353396 type.go:168] "Request Body" body=""
	I1213 10:35:42.914625  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:42.914994  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:43.414794  353396 type.go:168] "Request Body" body=""
	I1213 10:35:43.414871  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:43.415204  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:43.914954  353396 type.go:168] "Request Body" body=""
	I1213 10:35:43.915028  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:43.915294  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:43.915335  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:44.415170  353396 type.go:168] "Request Body" body=""
	I1213 10:35:44.415252  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:44.415625  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:44.914230  353396 type.go:168] "Request Body" body=""
	I1213 10:35:44.914311  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:44.914638  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:45.414189  353396 type.go:168] "Request Body" body=""
	I1213 10:35:45.414273  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:45.414545  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:45.914206  353396 type.go:168] "Request Body" body=""
	I1213 10:35:45.914285  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:45.914623  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:46.414264  353396 type.go:168] "Request Body" body=""
	I1213 10:35:46.414341  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:46.414706  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:46.414761  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:46.914394  353396 type.go:168] "Request Body" body=""
	I1213 10:35:46.914496  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:46.914842  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:47.414246  353396 type.go:168] "Request Body" body=""
	I1213 10:35:47.414321  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:47.414636  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:47.914823  353396 type.go:168] "Request Body" body=""
	I1213 10:35:47.914900  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:47.915205  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:48.414980  353396 type.go:168] "Request Body" body=""
	I1213 10:35:48.415049  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:48.415356  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:48.415416  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:48.915139  353396 type.go:168] "Request Body" body=""
	I1213 10:35:48.915222  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:48.915541  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:49.414295  353396 type.go:168] "Request Body" body=""
	I1213 10:35:49.414372  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:49.414675  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:49.914178  353396 type.go:168] "Request Body" body=""
	I1213 10:35:49.914246  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:49.914565  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:50.414236  353396 type.go:168] "Request Body" body=""
	I1213 10:35:50.414322  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:50.414646  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:50.914236  353396 type.go:168] "Request Body" body=""
	I1213 10:35:50.914312  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:50.914633  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:50.914705  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:51.414174  353396 type.go:168] "Request Body" body=""
	I1213 10:35:51.414251  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:51.414515  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:51.914227  353396 type.go:168] "Request Body" body=""
	I1213 10:35:51.914303  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:51.914621  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:52.414226  353396 type.go:168] "Request Body" body=""
	I1213 10:35:52.414312  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:52.414660  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:52.914528  353396 type.go:168] "Request Body" body=""
	I1213 10:35:52.914597  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:52.914892  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:52.914936  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:53.414626  353396 type.go:168] "Request Body" body=""
	I1213 10:35:53.414743  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:53.415155  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:53.914985  353396 type.go:168] "Request Body" body=""
	I1213 10:35:53.915060  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:53.915423  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:54.414132  353396 type.go:168] "Request Body" body=""
	I1213 10:35:54.414212  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:54.414538  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:54.914221  353396 type.go:168] "Request Body" body=""
	I1213 10:35:54.914300  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:54.914639  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:55.414361  353396 type.go:168] "Request Body" body=""
	I1213 10:35:55.414442  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:55.414760  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:55.414814  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:55.914153  353396 type.go:168] "Request Body" body=""
	I1213 10:35:55.914231  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:55.914493  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:56.414257  353396 type.go:168] "Request Body" body=""
	I1213 10:35:56.414339  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:56.414657  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:56.914216  353396 type.go:168] "Request Body" body=""
	I1213 10:35:56.914293  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:56.914667  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:57.414176  353396 type.go:168] "Request Body" body=""
	I1213 10:35:57.414254  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:57.414584  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:57.914966  353396 type.go:168] "Request Body" body=""
	I1213 10:35:57.915050  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:57.915391  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:57.915453  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:58.414132  353396 type.go:168] "Request Body" body=""
	I1213 10:35:58.414215  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:58.414528  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:58.914158  353396 type.go:168] "Request Body" body=""
	I1213 10:35:58.914236  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:58.914510  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:59.414124  353396 type.go:168] "Request Body" body=""
	I1213 10:35:59.414208  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:59.414536  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:59.914263  353396 type.go:168] "Request Body" body=""
	I1213 10:35:59.914349  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:59.914758  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:00.421144  353396 type.go:168] "Request Body" body=""
	I1213 10:36:00.421250  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:00.421612  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:00.421665  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:00.914230  353396 type.go:168] "Request Body" body=""
	I1213 10:36:00.914305  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:00.914644  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:01.414215  353396 type.go:168] "Request Body" body=""
	I1213 10:36:01.414292  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:01.414622  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:01.914179  353396 type.go:168] "Request Body" body=""
	I1213 10:36:01.914256  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:01.914522  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:02.414207  353396 type.go:168] "Request Body" body=""
	I1213 10:36:02.414283  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:02.414571  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:02.914503  353396 type.go:168] "Request Body" body=""
	I1213 10:36:02.914581  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:02.914941  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:02.915005  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:03.414758  353396 type.go:168] "Request Body" body=""
	I1213 10:36:03.414829  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:03.415178  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:03.914982  353396 type.go:168] "Request Body" body=""
	I1213 10:36:03.915057  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:03.915402  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:04.415064  353396 type.go:168] "Request Body" body=""
	I1213 10:36:04.415144  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:04.415523  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:04.914219  353396 type.go:168] "Request Body" body=""
	I1213 10:36:04.914298  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:04.914617  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:05.414231  353396 type.go:168] "Request Body" body=""
	I1213 10:36:05.414310  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:05.414671  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:05.414749  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:05.914422  353396 type.go:168] "Request Body" body=""
	I1213 10:36:05.914498  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:05.914864  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:06.414177  353396 type.go:168] "Request Body" body=""
	I1213 10:36:06.414262  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:06.414578  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:06.914278  353396 type.go:168] "Request Body" body=""
	I1213 10:36:06.914363  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:06.914742  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:07.414300  353396 type.go:168] "Request Body" body=""
	I1213 10:36:07.414382  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:07.414720  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:07.414787  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:07.914791  353396 type.go:168] "Request Body" body=""
	I1213 10:36:07.914860  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:07.915123  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:08.414897  353396 type.go:168] "Request Body" body=""
	I1213 10:36:08.414981  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:08.415336  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:08.915032  353396 type.go:168] "Request Body" body=""
	I1213 10:36:08.915117  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:08.915466  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:09.414191  353396 type.go:168] "Request Body" body=""
	I1213 10:36:09.414260  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:09.414540  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:09.914271  353396 type.go:168] "Request Body" body=""
	I1213 10:36:09.914352  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:09.914675  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:09.914752  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:10.414138  353396 type.go:168] "Request Body" body=""
	I1213 10:36:10.414216  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:10.414557  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:10.914195  353396 type.go:168] "Request Body" body=""
	I1213 10:36:10.914266  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:10.914534  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:11.414263  353396 type.go:168] "Request Body" body=""
	I1213 10:36:11.414339  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:11.414753  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:11.914459  353396 type.go:168] "Request Body" body=""
	I1213 10:36:11.914533  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:11.914890  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:11.914948  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:12.414139  353396 type.go:168] "Request Body" body=""
	I1213 10:36:12.414211  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:12.414474  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:12.914342  353396 type.go:168] "Request Body" body=""
	I1213 10:36:12.914427  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:12.914750  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:13.414215  353396 type.go:168] "Request Body" body=""
	I1213 10:36:13.414295  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:13.414650  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:13.914372  353396 type.go:168] "Request Body" body=""
	I1213 10:36:13.914451  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:13.914752  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:14.414251  353396 type.go:168] "Request Body" body=""
	I1213 10:36:14.414328  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:14.414656  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:14.414721  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:14.914256  353396 type.go:168] "Request Body" body=""
	I1213 10:36:14.914328  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:14.914611  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:15.415149  353396 type.go:168] "Request Body" body=""
	I1213 10:36:15.415221  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:15.415540  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:15.914232  353396 type.go:168] "Request Body" body=""
	I1213 10:36:15.914308  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:15.914678  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:16.414245  353396 type.go:168] "Request Body" body=""
	I1213 10:36:16.414325  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:16.414657  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:16.914285  353396 type.go:168] "Request Body" body=""
	I1213 10:36:16.914367  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:16.914649  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:16.914725  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:17.414244  353396 type.go:168] "Request Body" body=""
	I1213 10:36:17.414333  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:17.414644  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:17.914739  353396 type.go:168] "Request Body" body=""
	I1213 10:36:17.914821  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:17.915139  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:18.414875  353396 type.go:168] "Request Body" body=""
	I1213 10:36:18.414955  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:18.415226  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:18.915006  353396 type.go:168] "Request Body" body=""
	I1213 10:36:18.915082  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:18.915415  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:18.915472  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:19.415096  353396 type.go:168] "Request Body" body=""
	I1213 10:36:19.415183  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:19.415488  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:19.914201  353396 type.go:168] "Request Body" body=""
	I1213 10:36:19.914273  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:19.914619  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:20.414338  353396 type.go:168] "Request Body" body=""
	I1213 10:36:20.414409  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:20.414746  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:20.914260  353396 type.go:168] "Request Body" body=""
	I1213 10:36:20.914335  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:20.914704  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:21.414252  353396 type.go:168] "Request Body" body=""
	I1213 10:36:21.414338  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:21.414656  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:21.414724  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:21.914251  353396 type.go:168] "Request Body" body=""
	I1213 10:36:21.914328  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:21.914668  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-652709 request/response cycle repeats every ~500ms from 10:36:22 through 10:37:07, each with the same Accept/User-Agent headers and an empty response (status="" headers="" milliseconds=0); node_ready.go:55 logs the same "connection refused (will retry)" warning roughly every 2s throughout ...]
	W1213 10:37:07.915217  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:37:08.414751  353396 node_ready.go:38] duration metric: took 6m0.000751586s for node "functional-652709" to be "Ready" ...
	I1213 10:37:08.417881  353396 out.go:203] 
	W1213 10:37:08.420786  353396 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 10:37:08.420808  353396 out.go:285] * 
	W1213 10:37:08.422957  353396 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:37:08.425703  353396 out.go:203] 
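The six-minute window above is a plain poll-until-deadline loop: the client re-issues the same node GET every ~500ms and gives up when the context deadline fires, which is exactly the GUEST_START error reported. A minimal sketch of that pattern in Go follows; the names are hypothetical (this is not minikube's actual node_ready implementation), and InsecureSkipVerify is used only to keep the sketch self-contained against the cluster's self-signed cert.

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitNodeReady polls the apiserver's node endpoint until it answers 200 OK
// or the context deadline expires, mirroring the ~500ms cadence in the log.
func waitNodeReady(ctx context.Context, url string) error {
	client := &http.Client{
		// A real client loads the cluster CA instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			// The path taken above: "WaitNodeCondition: context deadline exceeded".
			return fmt.Errorf("waiting for node to be ready: %w", ctx.Err())
		case <-ticker.C:
			resp, err := client.Get(url)
			if err != nil {
				continue // e.g. "connect: connection refused" while the apiserver is down
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	fmt.Println(waitNodeReady(ctx, "https://192.168.49.2:8441/api/v1/nodes/functional-652709"))
}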
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.598797169Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.598872271Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.598967394Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.599038221Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.599099490Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.599162407Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.599221616Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.599281514Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.599349601Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.599433072Z" level=info msg="Connect containerd service"
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.599820046Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.600484496Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.612335546Z" level=info msg="Start subscribing containerd event"
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.612571864Z" level=info msg="Start recovering state"
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.612580390Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.612812277Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.652661888Z" level=info msg="Start event monitor"
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.652716773Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.652727021Z" level=info msg="Start streaming server"
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.652735989Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.652744260Z" level=info msg="runtime interface starting up..."
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.652752794Z" level=info msg="starting plugins..."
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.652765914Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 10:31:05 functional-652709 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 10:31:05 functional-652709 containerd[5259]: time="2025-12-13T10:31:05.654760247Z" level=info msg="containerd successfully booted in 0.080960s"
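The one error in an otherwise clean containerd startup is the CNI message: the CRI plugin scans /etc/cni/net.d at init and, finding no network config there yet, marks the CNI as uninitialized. A rough sketch of that scan is below; the directory path and error wording come from the log, but the scanning logic (including the glob patterns) is illustrative, not containerd's actual loader.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// loadCNIConfigs approximates the check behind "failed to load cni during
// init": look for network config files and fail when none are present.
func loadCNIConfigs(dir string) ([]string, error) {
	var configs []string
	for _, pattern := range []string{"*.conf", "*.conflist"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			return nil, err
		}
		configs = append(configs, matches...)
	}
	if len(configs) == 0 {
		return nil, fmt.Errorf("no network config found in %s: cni plugin not initialized", dir)
	}
	return configs, nil
}

func main() {
	configs, err := loadCNIConfigs("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("loaded CNI configs:", configs)
}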
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:12.701832    8631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:12.702497    8631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:12.704056    8631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:12.704703    8631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:12.706415    8631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
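These kubectl errors and the node polls earlier fail the same way: nothing is listening on port 8441. A direct TCP probe reproduces the symptom without any Kubernetes machinery (a minimal sketch, not part of the test harness):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the apiserver port directly.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err) // connect: connection refused
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}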
	
	
	==> dmesg <==
	[ +12.907693] overlayfs: idmapped layers are currently not supported
	[Dec13 09:25] overlayfs: idmapped layers are currently not supported
	[ +26.192425] overlayfs: idmapped layers are currently not supported
	[Dec13 09:26] overlayfs: idmapped layers are currently not supported
	[ +25.729788] overlayfs: idmapped layers are currently not supported
	[Dec13 09:27] overlayfs: idmapped layers are currently not supported
	[Dec13 09:28] overlayfs: idmapped layers are currently not supported
	[Dec13 09:31] overlayfs: idmapped layers are currently not supported
	[Dec13 09:32] overlayfs: idmapped layers are currently not supported
	[ +17.979093] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec13 09:43] overlayfs: idmapped layers are currently not supported
	[Dec13 09:45] overlayfs: idmapped layers are currently not supported
	[ +25.885102] overlayfs: idmapped layers are currently not supported
	[Dec13 09:46] overlayfs: idmapped layers are currently not supported
	[ +22.078149] overlayfs: idmapped layers are currently not supported
	[Dec13 09:47] overlayfs: idmapped layers are currently not supported
	[Dec13 09:48] overlayfs: idmapped layers are currently not supported
	[Dec13 09:49] overlayfs: idmapped layers are currently not supported
	[Dec13 09:51] overlayfs: idmapped layers are currently not supported
	[ +17.043564] overlayfs: idmapped layers are currently not supported
	[Dec13 09:52] overlayfs: idmapped layers are currently not supported
	[Dec13 09:53] overlayfs: idmapped layers are currently not supported
	[Dec13 10:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:19] hrtimer: interrupt took 21247146 ns
	
	
	==> kernel <==
	 10:37:12 up  3:19,  0 user,  load average: 0.45, 0.36, 0.79
	Linux functional-652709 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:37:09 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:37:10 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 812.
	Dec 13 10:37:10 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:37:10 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:37:10 functional-652709 kubelet[8477]: E1213 10:37:10.241022    8477 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:37:10 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:37:10 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:37:10 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 813.
	Dec 13 10:37:10 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:37:10 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:37:11 functional-652709 kubelet[8506]: E1213 10:37:11.001546    8506 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:37:11 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:37:11 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:37:11 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 814.
	Dec 13 10:37:11 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:37:11 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:37:11 functional-652709 kubelet[8528]: E1213 10:37:11.707714    8528 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:37:11 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:37:11 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:37:12 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 815.
	Dec 13 10:37:12 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:37:12 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:37:12 functional-652709 kubelet[8568]: E1213 10:37:12.466542    8568 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:37:12 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:37:12 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
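
The crash loop above is the common root cause for this failure group: the v1.35.0-beta.0 kubelet validates its configuration at startup and exits when the host is still on cgroup v1 (the Ubuntu 20.04 default), so systemd simply keeps restarting it (815 attempts by 10:37:12). As a first triage step, the hierarchy in use can be checked directly; a minimal sketch, assuming the profile name from this run:

	# "cgroup2fs" means the unified cgroup v2 hierarchy; "tmpfs" means the
	# legacy v1 hierarchy that this kubelet refuses to run on.
	minikube -p functional-652709 ssh -- stat -fc %T /sys/fs/cgroup/
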
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-652709 -n functional-652709
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-652709 -n functional-652709: exit status 2 (353.405342ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-652709" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.28s)
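
minikube status reports component state through both the Go template output and the exit code, which is why the harness notes "exit status 2 (may be ok)" rather than failing outright. A hedged by-hand equivalent of the check above, querying several fields at once:

	# Exit status is non-zero whenever a queried component is not Running;
	# the template fields mirror the ones used by helpers_test.go.
	out/minikube-linux-arm64 status -p functional-652709 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'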

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 kubectl -- --context functional-652709 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-652709 kubectl -- --context functional-652709 get pods: exit status 1 (110.727873ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-652709 kubectl -- --context functional-652709 get pods": exit status 1
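
The refused connection follows directly from the kubelet crash loop: with the kubelet down, the kube-apiserver static pod is never started, so nothing listens on 192.168.49.2:8441. Two hedged spot checks (the 33128 host port is taken from the docker inspect output below):

	# Inside the node: confirm nothing is bound to the apiserver port.
	minikube -p functional-652709 ssh -- sudo ss -ltn 'sport = :8441'
	# From the host, through the Docker port mapping 8441/tcp -> 127.0.0.1:33128:
	curl -sk https://127.0.0.1:33128/livez
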
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-652709
helpers_test.go:244: (dbg) docker inspect functional-652709:

-- stdout --
	[
	    {
	        "Id": "0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f",
	        "Created": "2025-12-13T10:22:44.366993781Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 347931,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:22:44.437030763Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/hostname",
	        "HostsPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/hosts",
	        "LogPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f-json.log",
	        "Name": "/functional-652709",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-652709:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-652709",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f",
	                "LowerDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095-init/diff:/var/lib/docker/overlay2/5d0ca86320f2c06765995d644a73af30cd6ddb36f5c1a2a6b1ebb24af53cdabd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/merged",
	                "UpperDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/diff",
	                "WorkDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-652709",
	                "Source": "/var/lib/docker/volumes/functional-652709/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-652709",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-652709",
	                "name.minikube.sigs.k8s.io": "functional-652709",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "52e527b5bd789a02eb7efb651200033ed4929e5fc7545e9df042d3f777cc9782",
	            "SandboxKey": "/var/run/docker/netns/52e527b5bd78",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-652709": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:23:08:9e:cb:13",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "344f2b940117dadb28d1ef1328f911c0446307288fdfafebfe59f38e473f79cb",
	                    "EndpointID": "8954f96e5987202be5715e7023384fe862744778b2520bccba28c57814f0980f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-652709",
	                        "0f6101071ca2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
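
Most of what the harness needs from that JSON can be extracted field-by-field with Go templates, in the same style the minikube code itself uses later in this log for the SSH port lookup; for example:

	# Container state and the host port mapped to the apiserver (8441/tcp):
	docker inspect -f '{{.State.Status}}' functional-652709
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-652709
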
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-652709 -n functional-652709
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-652709 -n functional-652709: exit status 2 (332.531987ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-319494 image ls --format short --alsologtostderr                                                                                             │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ image   │ functional-319494 image ls --format yaml --alsologtostderr                                                                                              │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ ssh     │ functional-319494 ssh pgrep buildkitd                                                                                                                   │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │                     │
	│ image   │ functional-319494 image ls --format json --alsologtostderr                                                                                              │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ image   │ functional-319494 image build -t localhost/my-image:functional-319494 testdata/build --alsologtostderr                                                  │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ image   │ functional-319494 image ls --format table --alsologtostderr                                                                                             │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ image   │ functional-319494 image ls                                                                                                                              │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ delete  │ -p functional-319494                                                                                                                                    │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ start   │ -p functional-652709 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │                     │
	│ start   │ -p functional-652709 --alsologtostderr -v=8                                                                                                             │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:31 UTC │                     │
	│ cache   │ functional-652709 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ cache   │ functional-652709 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ cache   │ functional-652709 cache add registry.k8s.io/pause:latest                                                                                                │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ cache   │ functional-652709 cache add minikube-local-cache-test:functional-652709                                                                                 │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ cache   │ functional-652709 cache delete minikube-local-cache-test:functional-652709                                                                              │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ ssh     │ functional-652709 ssh sudo crictl images                                                                                                                │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ ssh     │ functional-652709 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ ssh     │ functional-652709 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │                     │
	│ cache   │ functional-652709 cache reload                                                                                                                          │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ ssh     │ functional-652709 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ kubectl │ functional-652709 kubectl -- --context functional-652709 get pods                                                                                       │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:31:02
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
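	# (Decoding note, not part of the captured output.) In the klog prefix
	# "I1213 10:31:02.672113  353396 out.go:360]" below: I = Info severity,
	# 1213 = Dec 13, then the wall-clock time, the threadid field (here the
	# process id, 353396), and the file:line that emitted the message.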
	I1213 10:31:02.672113  353396 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:31:02.672249  353396 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:31:02.672258  353396 out.go:374] Setting ErrFile to fd 2...
	I1213 10:31:02.672263  353396 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:31:02.672511  353396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 10:31:02.672909  353396 out.go:368] Setting JSON to false
	I1213 10:31:02.673776  353396 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":11616,"bootTime":1765610247,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 10:31:02.673896  353396 start.go:143] virtualization:  
	I1213 10:31:02.677410  353396 out.go:179] * [functional-652709] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:31:02.681384  353396 notify.go:221] Checking for updates...
	I1213 10:31:02.681459  353396 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 10:31:02.684444  353396 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:31:02.687336  353396 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:31:02.690317  353396 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 10:31:02.693212  353396 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:31:02.696019  353396 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:31:02.699466  353396 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:31:02.699577  353396 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:31:02.725188  353396 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:31:02.725318  353396 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:31:02.796082  353396 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:31:02.785556605 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:31:02.796187  353396 docker.go:319] overlay module found
	I1213 10:31:02.799378  353396 out.go:179] * Using the docker driver based on existing profile
	I1213 10:31:02.802341  353396 start.go:309] selected driver: docker
	I1213 10:31:02.802370  353396 start.go:927] validating driver "docker" against &{Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:31:02.802524  353396 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:31:02.802652  353396 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:31:02.859333  353396 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:31:02.849982894 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:31:02.859762  353396 cni.go:84] Creating CNI manager for ""
	I1213 10:31:02.859824  353396 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:31:02.859884  353396 start.go:353] cluster config:
	{Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:31:02.863117  353396 out.go:179] * Starting "functional-652709" primary control-plane node in "functional-652709" cluster
	I1213 10:31:02.865981  353396 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 10:31:02.868957  353396 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:31:02.871941  353396 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 10:31:02.871997  353396 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 10:31:02.872008  353396 cache.go:65] Caching tarball of preloaded images
	I1213 10:31:02.872055  353396 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:31:02.872104  353396 preload.go:238] Found /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 10:31:02.872129  353396 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 10:31:02.872236  353396 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/config.json ...
	I1213 10:31:02.890218  353396 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:31:02.890243  353396 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:31:02.890259  353396 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:31:02.890291  353396 start.go:360] acquireMachinesLock for functional-652709: {Name:mk6e8c40fbbb5af0bb2468340fd710875030300d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:31:02.890351  353396 start.go:364] duration metric: took 34.691µs to acquireMachinesLock for "functional-652709"
	I1213 10:31:02.890374  353396 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:31:02.890380  353396 fix.go:54] fixHost starting: 
	I1213 10:31:02.890658  353396 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
	I1213 10:31:02.911217  353396 fix.go:112] recreateIfNeeded on functional-652709: state=Running err=<nil>
	W1213 10:31:02.911248  353396 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 10:31:02.914505  353396 out.go:252] * Updating the running docker "functional-652709" container ...
	I1213 10:31:02.914550  353396 machine.go:94] provisionDockerMachine start ...
	I1213 10:31:02.914653  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:02.937238  353396 main.go:143] libmachine: Using SSH client type: native
	I1213 10:31:02.937582  353396 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1213 10:31:02.937592  353396 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:31:03.091334  353396 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-652709
	
	I1213 10:31:03.091359  353396 ubuntu.go:182] provisioning hostname "functional-652709"
	I1213 10:31:03.091424  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:03.110422  353396 main.go:143] libmachine: Using SSH client type: native
	I1213 10:31:03.110837  353396 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1213 10:31:03.110855  353396 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-652709 && echo "functional-652709" | sudo tee /etc/hostname
	I1213 10:31:03.277113  353396 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-652709
	
	I1213 10:31:03.277196  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:03.294664  353396 main.go:143] libmachine: Using SSH client type: native
	I1213 10:31:03.295057  353396 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1213 10:31:03.295079  353396 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-652709' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-652709/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-652709' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:31:03.447182  353396 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:31:03.447207  353396 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-307042/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-307042/.minikube}
	I1213 10:31:03.447240  353396 ubuntu.go:190] setting up certificates
	I1213 10:31:03.447256  353396 provision.go:84] configureAuth start
	I1213 10:31:03.447330  353396 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-652709
	I1213 10:31:03.465044  353396 provision.go:143] copyHostCerts
	I1213 10:31:03.465100  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem
	I1213 10:31:03.465141  353396 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem, removing ...
	I1213 10:31:03.465148  353396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem
	I1213 10:31:03.465220  353396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem (1082 bytes)
	I1213 10:31:03.465329  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem
	I1213 10:31:03.465349  353396 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem, removing ...
	I1213 10:31:03.465353  353396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem
	I1213 10:31:03.465383  353396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem (1123 bytes)
	I1213 10:31:03.465436  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem
	I1213 10:31:03.465453  353396 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem, removing ...
	I1213 10:31:03.465457  353396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem
	I1213 10:31:03.465486  353396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem (1675 bytes)
	I1213 10:31:03.465541  353396 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem org=jenkins.functional-652709 san=[127.0.0.1 192.168.49.2 functional-652709 localhost minikube]
	I1213 10:31:03.927648  353396 provision.go:177] copyRemoteCerts
	I1213 10:31:03.927724  353396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:31:03.927763  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:03.947692  353396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:31:04.064623  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 10:31:04.064688  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:31:04.082355  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 10:31:04.082418  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:31:04.100866  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 10:31:04.100930  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:31:04.121259  353396 provision.go:87] duration metric: took 673.978127ms to configureAuth
	I1213 10:31:04.121312  353396 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:31:04.121495  353396 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:31:04.121509  353396 machine.go:97] duration metric: took 1.206951102s to provisionDockerMachine
	I1213 10:31:04.121518  353396 start.go:293] postStartSetup for "functional-652709" (driver="docker")
	I1213 10:31:04.121529  353396 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:31:04.121586  353396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:31:04.121633  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:04.139400  353396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:31:04.246752  353396 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:31:04.250273  353396 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1213 10:31:04.250297  353396 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1213 10:31:04.250302  353396 command_runner.go:130] > VERSION_ID="12"
	I1213 10:31:04.250307  353396 command_runner.go:130] > VERSION="12 (bookworm)"
	I1213 10:31:04.250312  353396 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1213 10:31:04.250316  353396 command_runner.go:130] > ID=debian
	I1213 10:31:04.250320  353396 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1213 10:31:04.250325  353396 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1213 10:31:04.250331  353396 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1213 10:31:04.250368  353396 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:31:04.250390  353396 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:31:04.250401  353396 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/addons for local assets ...
	I1213 10:31:04.250463  353396 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/files for local assets ...
	I1213 10:31:04.250545  353396 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> 3089152.pem in /etc/ssl/certs
	I1213 10:31:04.250556  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> /etc/ssl/certs/3089152.pem
	I1213 10:31:04.250633  353396 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/test/nested/copy/308915/hosts -> hosts in /etc/test/nested/copy/308915
	I1213 10:31:04.250715  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/test/nested/copy/308915/hosts -> /etc/test/nested/copy/308915/hosts
	I1213 10:31:04.250766  353396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/308915
	I1213 10:31:04.258199  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 10:31:04.275892  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/test/nested/copy/308915/hosts --> /etc/test/nested/copy/308915/hosts (40 bytes)
	I1213 10:31:04.293256  353396 start.go:296] duration metric: took 171.721845ms for postStartSetup
	I1213 10:31:04.293373  353396 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:31:04.293418  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:04.310428  353396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:31:04.412061  353396 command_runner.go:130] > 11%
	I1213 10:31:04.412134  353396 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:31:04.417606  353396 command_runner.go:130] > 174G
	I1213 10:31:04.418241  353396 fix.go:56] duration metric: took 1.527856492s for fixHost
	I1213 10:31:04.418260  353396 start.go:83] releasing machines lock for "functional-652709", held for 1.527895524s
	I1213 10:31:04.418328  353396 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-652709
	I1213 10:31:04.443217  353396 ssh_runner.go:195] Run: cat /version.json
	I1213 10:31:04.443268  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:04.443564  353396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:31:04.443617  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:04.481371  353396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:31:04.481516  353396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:31:04.669844  353396 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1213 10:31:04.669910  353396 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1213 10:31:04.670045  353396 ssh_runner.go:195] Run: systemctl --version
	I1213 10:31:04.676239  353396 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1213 10:31:04.676276  353396 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1213 10:31:04.676350  353396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 10:31:04.680689  353396 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 10:31:04.680854  353396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:31:04.680918  353396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:31:04.688793  353396 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 10:31:04.688818  353396 start.go:496] detecting cgroup driver to use...
	I1213 10:31:04.688851  353396 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:31:04.688909  353396 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 10:31:04.704425  353396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 10:31:04.717662  353396 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:31:04.717728  353396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:31:04.733551  353396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:31:04.746955  353396 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:31:04.865557  353396 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:31:04.977869  353396 docker.go:234] disabling docker service ...
	I1213 10:31:04.977950  353396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:31:04.992461  353396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:31:05.013428  353396 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:31:05.135601  353396 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:31:05.282715  353396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:31:05.296047  353396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:31:05.308957  353396 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1213 10:31:05.310188  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 10:31:05.319385  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 10:31:05.328561  353396 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 10:31:05.328627  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 10:31:05.337573  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:31:05.346847  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 10:31:05.355976  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:31:05.364985  353396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:31:05.373424  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 10:31:05.382892  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 10:31:05.391826  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 10:31:05.401136  353396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:31:05.407987  353396 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 10:31:05.408928  353396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:31:05.416444  353396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:31:05.526748  353396 ssh_runner.go:195] Run: sudo systemctl restart containerd
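	# (Annotation, not captured output.) The sed pipeline above pins the pause
	# image to registry.k8s.io/pause:3.10.1, forces SystemdCgroup = false to
	# match the "cgroupfs" driver detected on the host, and normalizes the runc
	# runtime entries before this containerd restart. A hedged spot check:
	#   minikube -p functional-652709 ssh -- grep -n 'SystemdCgroup' /etc/containerd/config.toml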
	I1213 10:31:05.655433  353396 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 10:31:05.655515  353396 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 10:31:05.659353  353396 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1213 10:31:05.659378  353396 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 10:31:05.659389  353396 command_runner.go:130] > Device: 0,72	Inode: 1622        Links: 1
	I1213 10:31:05.659396  353396 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 10:31:05.659402  353396 command_runner.go:130] > Access: 2025-12-13 10:31:05.610211940 +0000
	I1213 10:31:05.659407  353396 command_runner.go:130] > Modify: 2025-12-13 10:31:05.610211940 +0000
	I1213 10:31:05.659412  353396 command_runner.go:130] > Change: 2025-12-13 10:31:05.610211940 +0000
	I1213 10:31:05.659416  353396 command_runner.go:130] >  Birth: -
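start.go waits up to 60s for /run/containerd/containerd.sock to appear after the restart; the stat output above is that probe succeeding on the first try. A minimal shell sketch of an equivalent wait loop (timeout taken from the log):

  # Poll for the containerd socket, giving up after 60 seconds
  for i in $(seq 1 60); do
    stat /run/containerd/containerd.sock >/dev/null 2>&1 && break
    sleep 1
  done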
	I1213 10:31:05.660005  353396 start.go:564] Will wait 60s for crictl version
	I1213 10:31:05.660063  353396 ssh_runner.go:195] Run: which crictl
	I1213 10:31:05.663492  353396 command_runner.go:130] > /usr/local/bin/crictl
	I1213 10:31:05.663579  353396 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:31:05.685881  353396 command_runner.go:130] > Version:  0.1.0
	I1213 10:31:05.685946  353396 command_runner.go:130] > RuntimeName:  containerd
	I1213 10:31:05.686097  353396 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1213 10:31:05.686253  353396 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 10:31:05.688463  353396 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 10:31:05.688528  353396 ssh_runner.go:195] Run: containerd --version
	I1213 10:31:05.706883  353396 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1213 10:31:05.709639  353396 ssh_runner.go:195] Run: containerd --version
	I1213 10:31:05.727187  353396 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1213 10:31:05.735610  353396 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 10:31:05.738579  353396 cli_runner.go:164] Run: docker network inspect functional-652709 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:31:05.753316  353396 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 10:31:05.757039  353396 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1213 10:31:05.757213  353396 kubeadm.go:884] updating cluster {Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:31:05.757336  353396 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 10:31:05.757417  353396 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:31:05.778952  353396 command_runner.go:130] > {
	I1213 10:31:05.778976  353396 command_runner.go:130] >   "images":  [
	I1213 10:31:05.778980  353396 command_runner.go:130] >     {
	I1213 10:31:05.778990  353396 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 10:31:05.778995  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779001  353396 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 10:31:05.779005  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779009  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779018  353396 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1213 10:31:05.779024  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779028  353396 command_runner.go:130] >       "size":  "40636774",
	I1213 10:31:05.779032  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779041  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779045  353396 command_runner.go:130] >     },
	I1213 10:31:05.779053  353396 command_runner.go:130] >     {
	I1213 10:31:05.779066  353396 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 10:31:05.779074  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779080  353396 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 10:31:05.779087  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779091  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779102  353396 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 10:31:05.779106  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779110  353396 command_runner.go:130] >       "size":  "8034419",
	I1213 10:31:05.779116  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779120  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779128  353396 command_runner.go:130] >     },
	I1213 10:31:05.779131  353396 command_runner.go:130] >     {
	I1213 10:31:05.779138  353396 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 10:31:05.779145  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779150  353396 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 10:31:05.779157  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779163  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779175  353396 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1213 10:31:05.779181  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779185  353396 command_runner.go:130] >       "size":  "21168808",
	I1213 10:31:05.779190  353396 command_runner.go:130] >       "username":  "nonroot",
	I1213 10:31:05.779195  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779199  353396 command_runner.go:130] >     },
	I1213 10:31:05.779204  353396 command_runner.go:130] >     {
	I1213 10:31:05.779211  353396 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 10:31:05.779218  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779224  353396 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 10:31:05.779231  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779235  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779246  353396 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1213 10:31:05.779252  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779257  353396 command_runner.go:130] >       "size":  "21136588",
	I1213 10:31:05.779267  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.779275  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.779279  353396 command_runner.go:130] >       },
	I1213 10:31:05.779283  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779290  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779299  353396 command_runner.go:130] >     },
	I1213 10:31:05.779303  353396 command_runner.go:130] >     {
	I1213 10:31:05.779314  353396 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 10:31:05.779321  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779327  353396 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 10:31:05.779334  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779338  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779350  353396 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1213 10:31:05.779357  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779361  353396 command_runner.go:130] >       "size":  "24678359",
	I1213 10:31:05.779365  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.779375  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.779384  353396 command_runner.go:130] >       },
	I1213 10:31:05.779388  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779396  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779400  353396 command_runner.go:130] >     },
	I1213 10:31:05.779407  353396 command_runner.go:130] >     {
	I1213 10:31:05.779414  353396 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 10:31:05.779421  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779428  353396 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 10:31:05.779435  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779439  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779450  353396 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1213 10:31:05.779454  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779461  353396 command_runner.go:130] >       "size":  "20661043",
	I1213 10:31:05.779465  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.779473  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.779477  353396 command_runner.go:130] >       },
	I1213 10:31:05.779489  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779497  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779501  353396 command_runner.go:130] >     },
	I1213 10:31:05.779507  353396 command_runner.go:130] >     {
	I1213 10:31:05.779515  353396 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 10:31:05.779522  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779527  353396 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 10:31:05.779534  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779538  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779546  353396 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 10:31:05.779553  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779557  353396 command_runner.go:130] >       "size":  "22429671",
	I1213 10:31:05.779561  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779567  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779571  353396 command_runner.go:130] >     },
	I1213 10:31:05.779578  353396 command_runner.go:130] >     {
	I1213 10:31:05.779586  353396 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 10:31:05.779593  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779600  353396 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 10:31:05.779606  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779610  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779622  353396 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1213 10:31:05.779628  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779633  353396 command_runner.go:130] >       "size":  "15391364",
	I1213 10:31:05.779641  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.779645  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.779648  353396 command_runner.go:130] >       },
	I1213 10:31:05.779654  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779658  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779666  353396 command_runner.go:130] >     },
	I1213 10:31:05.779669  353396 command_runner.go:130] >     {
	I1213 10:31:05.779681  353396 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 10:31:05.779688  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779698  353396 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 10:31:05.779704  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779709  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779720  353396 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1213 10:31:05.779726  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779730  353396 command_runner.go:130] >       "size":  "267939",
	I1213 10:31:05.779735  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.779741  353396 command_runner.go:130] >         "value":  "65535"
	I1213 10:31:05.779744  353396 command_runner.go:130] >       },
	I1213 10:31:05.779753  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779758  353396 command_runner.go:130] >       "pinned":  true
	I1213 10:31:05.779764  353396 command_runner.go:130] >     }
	I1213 10:31:05.779767  353396 command_runner.go:130] >   ]
	I1213 10:31:05.779770  353396 command_runner.go:130] > }
	I1213 10:31:05.781791  353396 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 10:31:05.781813  353396 containerd.go:534] Images already preloaded, skipping extraction
	I1213 10:31:05.781881  353396 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:31:05.805396  353396 command_runner.go:130] > {
	I1213 10:31:05.805420  353396 command_runner.go:130] >   "images":  [
	I1213 10:31:05.805426  353396 command_runner.go:130] >     {
	I1213 10:31:05.805436  353396 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 10:31:05.805441  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.805447  353396 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 10:31:05.805452  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805456  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.805465  353396 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1213 10:31:05.805471  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805477  353396 command_runner.go:130] >       "size":  "40636774",
	I1213 10:31:05.805485  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.805490  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.805501  353396 command_runner.go:130] >     },
	I1213 10:31:05.805504  353396 command_runner.go:130] >     {
	I1213 10:31:05.805512  353396 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 10:31:05.805517  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.805523  353396 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 10:31:05.805528  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805543  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.805556  353396 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 10:31:05.805566  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805576  353396 command_runner.go:130] >       "size":  "8034419",
	I1213 10:31:05.805580  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.805590  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.805594  353396 command_runner.go:130] >     },
	I1213 10:31:05.805601  353396 command_runner.go:130] >     {
	I1213 10:31:05.805608  353396 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 10:31:05.805619  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.805625  353396 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 10:31:05.805630  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805655  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.805669  353396 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1213 10:31:05.805675  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805680  353396 command_runner.go:130] >       "size":  "21168808",
	I1213 10:31:05.805687  353396 command_runner.go:130] >       "username":  "nonroot",
	I1213 10:31:05.805693  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.805697  353396 command_runner.go:130] >     },
	I1213 10:31:05.805701  353396 command_runner.go:130] >     {
	I1213 10:31:05.805707  353396 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 10:31:05.805715  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.805720  353396 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 10:31:05.805727  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805732  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.805743  353396 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1213 10:31:05.805750  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805754  353396 command_runner.go:130] >       "size":  "21136588",
	I1213 10:31:05.805762  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.805772  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.805778  353396 command_runner.go:130] >       },
	I1213 10:31:05.805783  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.805787  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.805795  353396 command_runner.go:130] >     },
	I1213 10:31:05.805803  353396 command_runner.go:130] >     {
	I1213 10:31:05.805810  353396 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 10:31:05.805818  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.805824  353396 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 10:31:05.805846  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805855  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.805863  353396 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1213 10:31:05.805867  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805873  353396 command_runner.go:130] >       "size":  "24678359",
	I1213 10:31:05.805877  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.805891  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.805894  353396 command_runner.go:130] >       },
	I1213 10:31:05.805899  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.805906  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.805910  353396 command_runner.go:130] >     },
	I1213 10:31:05.805917  353396 command_runner.go:130] >     {
	I1213 10:31:05.805924  353396 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 10:31:05.805931  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.805938  353396 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 10:31:05.805941  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805946  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.805956  353396 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1213 10:31:05.805963  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805967  353396 command_runner.go:130] >       "size":  "20661043",
	I1213 10:31:05.805972  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.805979  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.805983  353396 command_runner.go:130] >       },
	I1213 10:31:05.805991  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.805995  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.806002  353396 command_runner.go:130] >     },
	I1213 10:31:05.806005  353396 command_runner.go:130] >     {
	I1213 10:31:05.806012  353396 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 10:31:05.806021  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.806032  353396 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 10:31:05.806036  353396 command_runner.go:130] >       ],
	I1213 10:31:05.806040  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.806048  353396 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 10:31:05.806055  353396 command_runner.go:130] >       ],
	I1213 10:31:05.806059  353396 command_runner.go:130] >       "size":  "22429671",
	I1213 10:31:05.806068  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.806072  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.806078  353396 command_runner.go:130] >     },
	I1213 10:31:05.806082  353396 command_runner.go:130] >     {
	I1213 10:31:05.806089  353396 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 10:31:05.806096  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.806101  353396 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 10:31:05.806109  353396 command_runner.go:130] >       ],
	I1213 10:31:05.806113  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.806124  353396 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1213 10:31:05.806131  353396 command_runner.go:130] >       ],
	I1213 10:31:05.806135  353396 command_runner.go:130] >       "size":  "15391364",
	I1213 10:31:05.806139  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.806147  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.806151  353396 command_runner.go:130] >       },
	I1213 10:31:05.806159  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.806164  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.806171  353396 command_runner.go:130] >     },
	I1213 10:31:05.806174  353396 command_runner.go:130] >     {
	I1213 10:31:05.806180  353396 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 10:31:05.806186  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.806191  353396 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 10:31:05.806197  353396 command_runner.go:130] >       ],
	I1213 10:31:05.806202  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.806213  353396 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1213 10:31:05.806217  353396 command_runner.go:130] >       ],
	I1213 10:31:05.806230  353396 command_runner.go:130] >       "size":  "267939",
	I1213 10:31:05.806238  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.806242  353396 command_runner.go:130] >         "value":  "65535"
	I1213 10:31:05.806251  353396 command_runner.go:130] >       },
	I1213 10:31:05.806255  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.806259  353396 command_runner.go:130] >       "pinned":  true
	I1213 10:31:05.806262  353396 command_runner.go:130] >     }
	I1213 10:31:05.806267  353396 command_runner.go:130] >   ]
	I1213 10:31:05.806271  353396 command_runner.go:130] > }
	I1213 10:31:05.808725  353396 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 10:31:05.808749  353396 cache_images.go:86] Images are preloaded, skipping loading
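The two crictl dumps above are the preload check: minikube parses `sudo crictl images --output json` and, since every kubeadm image for v1.35.0-beta.0 is already present, skips extraction and loading. To eyeball the same data by hand (jq is an assumption here; it is not necessarily present in the node image):

  sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort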
	I1213 10:31:05.808757  353396 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1213 10:31:05.808887  353396 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-652709 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
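kubeadm.go:947 renders the kubelet systemd drop-in shown above; the scp lines below write it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes). A sketch of installing such a drop-in by hand, reusing the ExecStart from the log:

  sudo mkdir -p /etc/systemd/system/kubelet.service.d
  printf '%s\n' \
    '[Unit]' \
    'Wants=containerd.service' \
    '' \
    '[Service]' \
    'ExecStart=' \
    'ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-652709 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2' \
    | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
  sudo systemctl daemon-reload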
	I1213 10:31:05.808967  353396 ssh_runner.go:195] Run: sudo crictl info
	I1213 10:31:05.831572  353396 command_runner.go:130] > {
	I1213 10:31:05.831594  353396 command_runner.go:130] >   "cniconfig": {
	I1213 10:31:05.831601  353396 command_runner.go:130] >     "Networks": [
	I1213 10:31:05.831604  353396 command_runner.go:130] >       {
	I1213 10:31:05.831609  353396 command_runner.go:130] >         "Config": {
	I1213 10:31:05.831614  353396 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1213 10:31:05.831619  353396 command_runner.go:130] >           "Name": "cni-loopback",
	I1213 10:31:05.831623  353396 command_runner.go:130] >           "Plugins": [
	I1213 10:31:05.831627  353396 command_runner.go:130] >             {
	I1213 10:31:05.831631  353396 command_runner.go:130] >               "Network": {
	I1213 10:31:05.831635  353396 command_runner.go:130] >                 "ipam": {},
	I1213 10:31:05.831641  353396 command_runner.go:130] >                 "type": "loopback"
	I1213 10:31:05.831650  353396 command_runner.go:130] >               },
	I1213 10:31:05.831662  353396 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1213 10:31:05.831670  353396 command_runner.go:130] >             }
	I1213 10:31:05.831674  353396 command_runner.go:130] >           ],
	I1213 10:31:05.831684  353396 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1213 10:31:05.831688  353396 command_runner.go:130] >         },
	I1213 10:31:05.831696  353396 command_runner.go:130] >         "IFName": "lo"
	I1213 10:31:05.831703  353396 command_runner.go:130] >       }
	I1213 10:31:05.831707  353396 command_runner.go:130] >     ],
	I1213 10:31:05.831712  353396 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1213 10:31:05.831720  353396 command_runner.go:130] >     "PluginDirs": [
	I1213 10:31:05.831724  353396 command_runner.go:130] >       "/opt/cni/bin"
	I1213 10:31:05.831731  353396 command_runner.go:130] >     ],
	I1213 10:31:05.831736  353396 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1213 10:31:05.831743  353396 command_runner.go:130] >     "Prefix": "eth"
	I1213 10:31:05.831747  353396 command_runner.go:130] >   },
	I1213 10:31:05.831754  353396 command_runner.go:130] >   "config": {
	I1213 10:31:05.831762  353396 command_runner.go:130] >     "cdiSpecDirs": [
	I1213 10:31:05.831765  353396 command_runner.go:130] >       "/etc/cdi",
	I1213 10:31:05.831781  353396 command_runner.go:130] >       "/var/run/cdi"
	I1213 10:31:05.831789  353396 command_runner.go:130] >     ],
	I1213 10:31:05.831793  353396 command_runner.go:130] >     "cni": {
	I1213 10:31:05.831797  353396 command_runner.go:130] >       "binDir": "",
	I1213 10:31:05.831801  353396 command_runner.go:130] >       "binDirs": [
	I1213 10:31:05.831810  353396 command_runner.go:130] >         "/opt/cni/bin"
	I1213 10:31:05.831814  353396 command_runner.go:130] >       ],
	I1213 10:31:05.831818  353396 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1213 10:31:05.831821  353396 command_runner.go:130] >       "confTemplate": "",
	I1213 10:31:05.831825  353396 command_runner.go:130] >       "ipPref": "",
	I1213 10:31:05.831829  353396 command_runner.go:130] >       "maxConfNum": 1,
	I1213 10:31:05.831832  353396 command_runner.go:130] >       "setupSerially": false,
	I1213 10:31:05.831837  353396 command_runner.go:130] >       "useInternalLoopback": false
	I1213 10:31:05.831840  353396 command_runner.go:130] >     },
	I1213 10:31:05.831851  353396 command_runner.go:130] >     "containerd": {
	I1213 10:31:05.831859  353396 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1213 10:31:05.831864  353396 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1213 10:31:05.831869  353396 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1213 10:31:05.831872  353396 command_runner.go:130] >       "runtimes": {
	I1213 10:31:05.831875  353396 command_runner.go:130] >         "runc": {
	I1213 10:31:05.831879  353396 command_runner.go:130] >           "ContainerAnnotations": null,
	I1213 10:31:05.831884  353396 command_runner.go:130] >           "PodAnnotations": null,
	I1213 10:31:05.831891  353396 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1213 10:31:05.831895  353396 command_runner.go:130] >           "cgroupWritable": false,
	I1213 10:31:05.831899  353396 command_runner.go:130] >           "cniConfDir": "",
	I1213 10:31:05.831905  353396 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1213 10:31:05.831910  353396 command_runner.go:130] >           "io_type": "",
	I1213 10:31:05.831919  353396 command_runner.go:130] >           "options": {
	I1213 10:31:05.831924  353396 command_runner.go:130] >             "BinaryName": "",
	I1213 10:31:05.831929  353396 command_runner.go:130] >             "CriuImagePath": "",
	I1213 10:31:05.831936  353396 command_runner.go:130] >             "CriuWorkPath": "",
	I1213 10:31:05.831940  353396 command_runner.go:130] >             "IoGid": 0,
	I1213 10:31:05.831948  353396 command_runner.go:130] >             "IoUid": 0,
	I1213 10:31:05.831953  353396 command_runner.go:130] >             "NoNewKeyring": false,
	I1213 10:31:05.831961  353396 command_runner.go:130] >             "Root": "",
	I1213 10:31:05.831965  353396 command_runner.go:130] >             "ShimCgroup": "",
	I1213 10:31:05.831970  353396 command_runner.go:130] >             "SystemdCgroup": false
	I1213 10:31:05.831992  353396 command_runner.go:130] >           },
	I1213 10:31:05.831998  353396 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1213 10:31:05.832004  353396 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1213 10:31:05.832011  353396 command_runner.go:130] >           "runtimePath": "",
	I1213 10:31:05.832017  353396 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1213 10:31:05.832025  353396 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1213 10:31:05.832030  353396 command_runner.go:130] >           "snapshotter": ""
	I1213 10:31:05.832037  353396 command_runner.go:130] >         }
	I1213 10:31:05.832040  353396 command_runner.go:130] >       }
	I1213 10:31:05.832043  353396 command_runner.go:130] >     },
	I1213 10:31:05.832055  353396 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1213 10:31:05.832065  353396 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1213 10:31:05.832073  353396 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1213 10:31:05.832081  353396 command_runner.go:130] >     "disableApparmor": false,
	I1213 10:31:05.832086  353396 command_runner.go:130] >     "disableHugetlbController": true,
	I1213 10:31:05.832093  353396 command_runner.go:130] >     "disableProcMount": false,
	I1213 10:31:05.832098  353396 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1213 10:31:05.832106  353396 command_runner.go:130] >     "enableCDI": true,
	I1213 10:31:05.832110  353396 command_runner.go:130] >     "enableSelinux": false,
	I1213 10:31:05.832118  353396 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1213 10:31:05.832123  353396 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1213 10:31:05.832131  353396 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1213 10:31:05.832135  353396 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1213 10:31:05.832140  353396 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1213 10:31:05.832144  353396 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1213 10:31:05.832151  353396 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1213 10:31:05.832157  353396 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1213 10:31:05.832165  353396 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1213 10:31:05.832171  353396 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1213 10:31:05.832180  353396 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1213 10:31:05.832185  353396 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1213 10:31:05.832192  353396 command_runner.go:130] >   },
	I1213 10:31:05.832195  353396 command_runner.go:130] >   "features": {
	I1213 10:31:05.832204  353396 command_runner.go:130] >     "supplemental_groups_policy": true
	I1213 10:31:05.832208  353396 command_runner.go:130] >   },
	I1213 10:31:05.832212  353396 command_runner.go:130] >   "golang": "go1.24.9",
	I1213 10:31:05.832222  353396 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1213 10:31:05.832235  353396 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1213 10:31:05.832240  353396 command_runner.go:130] >   "runtimeHandlers": [
	I1213 10:31:05.832245  353396 command_runner.go:130] >     {
	I1213 10:31:05.832248  353396 command_runner.go:130] >       "features": {
	I1213 10:31:05.832257  353396 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1213 10:31:05.832262  353396 command_runner.go:130] >         "user_namespaces": true
	I1213 10:31:05.832268  353396 command_runner.go:130] >       }
	I1213 10:31:05.832276  353396 command_runner.go:130] >     },
	I1213 10:31:05.832283  353396 command_runner.go:130] >     {
	I1213 10:31:05.832287  353396 command_runner.go:130] >       "features": {
	I1213 10:31:05.832295  353396 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1213 10:31:05.832299  353396 command_runner.go:130] >         "user_namespaces": true
	I1213 10:31:05.832302  353396 command_runner.go:130] >       },
	I1213 10:31:05.832307  353396 command_runner.go:130] >       "name": "runc"
	I1213 10:31:05.832310  353396 command_runner.go:130] >     }
	I1213 10:31:05.832313  353396 command_runner.go:130] >   ],
	I1213 10:31:05.832316  353396 command_runner.go:130] >   "status": {
	I1213 10:31:05.832320  353396 command_runner.go:130] >     "conditions": [
	I1213 10:31:05.832325  353396 command_runner.go:130] >       {
	I1213 10:31:05.832330  353396 command_runner.go:130] >         "message": "",
	I1213 10:31:05.832337  353396 command_runner.go:130] >         "reason": "",
	I1213 10:31:05.832344  353396 command_runner.go:130] >         "status": true,
	I1213 10:31:05.832354  353396 command_runner.go:130] >         "type": "RuntimeReady"
	I1213 10:31:05.832362  353396 command_runner.go:130] >       },
	I1213 10:31:05.832365  353396 command_runner.go:130] >       {
	I1213 10:31:05.832375  353396 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1213 10:31:05.832380  353396 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1213 10:31:05.832383  353396 command_runner.go:130] >         "status": false,
	I1213 10:31:05.832388  353396 command_runner.go:130] >         "type": "NetworkReady"
	I1213 10:31:05.832396  353396 command_runner.go:130] >       },
	I1213 10:31:05.832399  353396 command_runner.go:130] >       {
	I1213 10:31:05.832422  353396 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1213 10:31:05.832434  353396 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1213 10:31:05.832444  353396 command_runner.go:130] >         "status": false,
	I1213 10:31:05.832451  353396 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1213 10:31:05.832454  353396 command_runner.go:130] >       }
	I1213 10:31:05.832457  353396 command_runner.go:130] >     ]
	I1213 10:31:05.832461  353396 command_runner.go:130] >   }
	I1213 10:31:05.832463  353396 command_runner.go:130] > }
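Note the status.conditions at the end of the crictl info dump: RuntimeReady is true, but NetworkReady is false with "cni plugin not initialized". That is expected at this point, since the kindnet CNI recommended just below has not been deployed yet; the third condition is only containerd's cgroup v1 deprecation warning. To pull just those conditions out of the dump (again assuming jq is available):

  sudo crictl info | jq '.status.conditions'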
	I1213 10:31:05.834983  353396 cni.go:84] Creating CNI manager for ""
	I1213 10:31:05.835008  353396 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:31:05.835032  353396 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:31:05.835055  353396 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-652709 NodeName:functional-652709 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:31:05.835177  353396 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-652709"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 10:31:05.835253  353396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:31:05.843333  353396 command_runner.go:130] > kubeadm
	I1213 10:31:05.843355  353396 command_runner.go:130] > kubectl
	I1213 10:31:05.843360  353396 command_runner.go:130] > kubelet
	I1213 10:31:05.843375  353396 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:31:05.843451  353396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:31:05.851169  353396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 10:31:05.865230  353396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:31:05.877883  353396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
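The kubeadm config rendered above is what just got written to /var/tmp/minikube/kubeadm.yaml.new (2237 bytes). If you wanted to check such a file by hand, a hedged sketch, assuming a kubeadm release new enough to ship the `config validate` subcommand:

  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new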
	I1213 10:31:05.891827  353396 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:31:05.896023  353396 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1213 10:31:05.896126  353396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:31:06.037110  353396 ssh_runner.go:195] Run: sudo systemctl start kubelet
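With the drop-in in place and units reloaded, kubelet is started before kubeadm runs. A quick way to confirm the unit actually came up, and to see why if it did not:

  sudo systemctl is-active kubelet
  sudo journalctl -u kubelet --no-pager -n 50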
	I1213 10:31:06.663693  353396 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709 for IP: 192.168.49.2
	I1213 10:31:06.663826  353396 certs.go:195] generating shared ca certs ...
	I1213 10:31:06.663858  353396 certs.go:227] acquiring lock for ca certs: {Name:mkca84a85b1b664dba08ef567e84d493256f825e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:31:06.664061  353396 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key
	I1213 10:31:06.664135  353396 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key
	I1213 10:31:06.664169  353396 certs.go:257] generating profile certs ...
	I1213 10:31:06.664331  353396 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.key
	I1213 10:31:06.664442  353396 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.key.86e7afd1
	I1213 10:31:06.664517  353396 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.key
	I1213 10:31:06.664552  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 10:31:06.664592  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 10:31:06.664634  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 10:31:06.664671  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 10:31:06.664701  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 10:31:06.664745  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 10:31:06.664781  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 10:31:06.664811  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 10:31:06.664893  353396 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem (1338 bytes)
	W1213 10:31:06.664965  353396 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915_empty.pem, impossibly tiny 0 bytes
	I1213 10:31:06.664999  353396 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:31:06.665056  353396 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:31:06.665113  353396 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:31:06.665174  353396 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem (1675 bytes)
	I1213 10:31:06.665258  353396 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 10:31:06.665367  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> /usr/share/ca-certificates/3089152.pem
	I1213 10:31:06.665414  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:31:06.665453  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem -> /usr/share/ca-certificates/308915.pem
	I1213 10:31:06.666083  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:31:06.686373  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:31:06.706393  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:31:06.727893  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:31:06.748376  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:31:06.769115  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 10:31:06.788184  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:31:06.807317  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:31:06.826240  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /usr/share/ca-certificates/3089152.pem (1708 bytes)
	I1213 10:31:06.845063  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:31:06.863130  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem --> /usr/share/ca-certificates/308915.pem (1338 bytes)
	I1213 10:31:06.881577  353396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:31:06.894536  353396 ssh_runner.go:195] Run: openssl version
	I1213 10:31:06.900741  353396 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1213 10:31:06.901231  353396 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/308915.pem
	I1213 10:31:06.909107  353396 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/308915.pem /etc/ssl/certs/308915.pem
	I1213 10:31:06.916518  353396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/308915.pem
	I1213 10:31:06.920250  353396 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 10:22 /usr/share/ca-certificates/308915.pem
	I1213 10:31:06.920295  353396 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:22 /usr/share/ca-certificates/308915.pem
	I1213 10:31:06.920347  353396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308915.pem
	I1213 10:31:06.961321  353396 command_runner.go:130] > 51391683
	I1213 10:31:06.961405  353396 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:31:06.969200  353396 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3089152.pem
	I1213 10:31:06.976714  353396 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3089152.pem /etc/ssl/certs/3089152.pem
	I1213 10:31:06.984537  353396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3089152.pem
	I1213 10:31:06.988716  353396 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 10:22 /usr/share/ca-certificates/3089152.pem
	I1213 10:31:06.988763  353396 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:22 /usr/share/ca-certificates/3089152.pem
	I1213 10:31:06.988817  353396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3089152.pem
	I1213 10:31:07.029862  353396 command_runner.go:130] > 3ec20f2e
	I1213 10:31:07.030284  353396 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:31:07.037958  353396 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:31:07.045451  353396 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:31:07.053144  353396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:31:07.056994  353396 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:31:07.057051  353396 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:31:07.057104  353396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:31:07.097856  353396 command_runner.go:130] > b5213941
	I1213 10:31:07.098292  353396 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
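Each of the three blocks above installs one CA into the OpenSSL trust store: link the PEM under /etc/ssl/certs, compute its subject hash (51391683, 3ec20f2e and b5213941 in the log), and verify a <hash>.0 symlink exists so OpenSSL's hash-based lookup can find it. Creating that layout by hand for one cert would look like this sketch:

  pem=/usr/share/ca-certificates/minikubeCA.pem
  sudo ln -fs "$pem" /etc/ssl/certs/minikubeCA.pem
  hash=$(sudo openssl x509 -hash -noout -in "$pem")   # prints b5213941 for this CA, per the log
  sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"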
	I1213 10:31:07.106039  353396 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:31:07.109917  353396 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:31:07.109945  353396 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 10:31:07.109953  353396 command_runner.go:130] > Device: 259,1	Inode: 3399222     Links: 1
	I1213 10:31:07.109960  353396 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 10:31:07.109966  353396 command_runner.go:130] > Access: 2025-12-13 10:26:59.103845116 +0000
	I1213 10:31:07.109971  353396 command_runner.go:130] > Modify: 2025-12-13 10:22:52.641441584 +0000
	I1213 10:31:07.109977  353396 command_runner.go:130] > Change: 2025-12-13 10:22:52.641441584 +0000
	I1213 10:31:07.109982  353396 command_runner.go:130] >  Birth: 2025-12-13 10:22:52.641441584 +0000
	I1213 10:31:07.110079  353396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:31:07.151277  353396 command_runner.go:130] > Certificate will not expire
	I1213 10:31:07.151699  353396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:31:07.192420  353396 command_runner.go:130] > Certificate will not expire
	I1213 10:31:07.192514  353396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:31:07.233686  353396 command_runner.go:130] > Certificate will not expire
	I1213 10:31:07.233923  353396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:31:07.275302  353396 command_runner.go:130] > Certificate will not expire
	I1213 10:31:07.275760  353396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:31:07.324799  353396 command_runner.go:130] > Certificate will not expire
	I1213 10:31:07.325290  353396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 10:31:07.377047  353396 command_runner.go:130] > Certificate will not expire
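	(Each `-checkend 86400` run above asks OpenSSL whether the certificate will still be valid 86400 seconds — 24 hours — from now; it exits 0 and prints "Certificate will not expire" when the cert outlives that window. The same check in isolation, using one of the paths from the log:

	    # exit 0 => valid for at least another 24h; exit 1 => expires within the window
	    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	      echo "cert valid for >= 24h"
	    else
	      echo "cert expires within 24h; rotation needed"
	    fi
	)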
	I1213 10:31:07.377629  353396 kubeadm.go:401] StartCluster: {Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
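	(The StartCluster block above is minikube printing the profile's ClusterConfig. The same settings live on disk as JSON; one way to inspect them, assuming the default MINIKUBE_HOME layout and that jq is installed — in this CI run the .minikube directory sits under the jenkins workspace instead of $HOME:

	    # profile config for this cluster (path assumes default MINIKUBE_HOME)
	    jq '.KubernetesConfig | {KubernetesVersion, ClusterName, ContainerRuntime, ServiceCIDR}' \
	      "$HOME/.minikube/profiles/functional-652709/config.json"
	)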
	I1213 10:31:07.377757  353396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 10:31:07.377843  353396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:31:07.405423  353396 cri.go:89] found id: ""
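	(cri.go lists kube-system containers through crictl with a pod-namespace label filter; the empty `found id: ""` means no matching containers are running yet. This is the exact command the log runs over SSH:

	    # list all (-a) container IDs (--quiet) whose pod-namespace label is kube-system
	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	    # an empty result is what the log reports as: found id: ""
	)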
	I1213 10:31:07.405508  353396 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:31:07.414529  353396 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1213 10:31:07.414595  353396 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1213 10:31:07.414615  353396 command_runner.go:130] > /var/lib/minikube/etcd:
	I1213 10:31:07.415690  353396 kubeadm.go:417] found existing configuration files, will attempt cluster restart
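	(The `ls` above is the restart heuristic: if the kubeadm flags file, the kubelet config, and the etcd data directory all exist, minikube attempts a control-plane restart instead of a fresh `kubeadm init`. A sketch of that gate, using the same three paths:

	    # all three artifacts present => reuse the existing cluster state
	    if sudo ls /var/lib/kubelet/kubeadm-flags.env \
	               /var/lib/kubelet/config.yaml \
	               /var/lib/minikube/etcd >/dev/null 2>&1; then
	      echo "existing configuration found: attempting cluster restart"
	    else
	      echo "no prior state: full kubeadm init required"
	    fi
	)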
	I1213 10:31:07.415743  353396 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:31:07.415805  353396 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:31:07.423401  353396 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:31:07.423850  353396 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-652709" does not appear in /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:31:07.423998  353396 kubeconfig.go:62] /home/jenkins/minikube-integration/22127-307042/kubeconfig needs updating (will repair): [kubeconfig missing "functional-652709" cluster setting kubeconfig missing "functional-652709" context setting]
	I1213 10:31:07.424313  353396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/kubeconfig: {Name:mk9039d1d506122ff52224469c478e739c5ebabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
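	(kubeconfig.go found neither a cluster nor a context entry for "functional-652709", so it repairs the kubeconfig under a write lock. The same repair can be expressed with kubectl's config subcommands; a sketch, assuming the endpoint and CA shown elsewhere in this log and the default ~/.minikube layout:

	    # recreate the missing cluster and context entries for this profile
	    kubectl config set-cluster functional-652709 \
	      --server=https://192.168.49.2:8441 \
	      --certificate-authority="$HOME/.minikube/ca.crt"
	    kubectl config set-context functional-652709 \
	      --cluster=functional-652709 --user=functional-652709
	    kubectl config use-context functional-652709
	)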
	I1213 10:31:07.424829  353396 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:31:07.425032  353396 kapi.go:59] client config for functional-652709: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt", KeyFile:"/home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.key", CAFile:"/home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 10:31:07.425626  353396 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 10:31:07.425778  353396 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 10:31:07.425812  353396 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 10:31:07.425854  353396 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 10:31:07.425888  353396 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 10:31:07.425723  353396 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 10:31:07.426245  353396 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:31:07.437887  353396 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1213 10:31:07.437960  353396 kubeadm.go:602] duration metric: took 22.197398ms to restartPrimaryControlPlane
	I1213 10:31:07.437984  353396 kubeadm.go:403] duration metric: took 60.362619ms to StartCluster
	I1213 10:31:07.438027  353396 settings.go:142] acquiring lock: {Name:mk079e9a25ebbc2c8fbae42d4c6ed096a652c00b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:31:07.438107  353396 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:31:07.438874  353396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/kubeconfig: {Name:mk9039d1d506122ff52224469c478e739c5ebabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:31:07.439133  353396 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 10:31:07.439572  353396 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:31:07.439649  353396 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 10:31:07.439895  353396 addons.go:70] Setting storage-provisioner=true in profile "functional-652709"
	I1213 10:31:07.439924  353396 addons.go:239] Setting addon storage-provisioner=true in "functional-652709"
	I1213 10:31:07.440086  353396 host.go:66] Checking if "functional-652709" exists ...
	I1213 10:31:07.439942  353396 addons.go:70] Setting default-storageclass=true in profile "functional-652709"
	I1213 10:31:07.440166  353396 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-652709"
	I1213 10:31:07.440530  353396 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
	I1213 10:31:07.440672  353396 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
	I1213 10:31:07.445924  353396 out.go:179] * Verifying Kubernetes components...
	I1213 10:31:07.449291  353396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:31:07.477163  353396 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:31:07.477818  353396 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:31:07.477982  353396 kapi.go:59] client config for functional-652709: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt", KeyFile:"/home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.key", CAFile:"/home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 10:31:07.478289  353396 addons.go:239] Setting addon default-storageclass=true in "functional-652709"
	I1213 10:31:07.478317  353396 host.go:66] Checking if "functional-652709" exists ...
	I1213 10:31:07.478815  353396 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
	I1213 10:31:07.480787  353396 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:07.480804  353396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:31:07.480857  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:07.506052  353396 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:07.506074  353396 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:31:07.506149  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:07.532221  353396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:31:07.553427  353396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
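	(The two `docker container inspect -f ... "22/tcp"` calls above resolve the host port Docker mapped to the node container's sshd — 33125 here — and sshutil then dials 127.0.0.1 on that port with the per-machine key. Reproducing the lookup by hand, with the key path shortened to the default ~/.minikube layout:

	    # find the host port published for the container's sshd
	    PORT=$(docker container inspect -f \
	      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-652709)
	    # connect the way sshutil does, with the per-machine key
	    ssh -i "$HOME/.minikube/machines/functional-652709/id_rsa" \
	      -p "$PORT" docker@127.0.0.1 -- uname -a
	)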
	I1213 10:31:07.654835  353396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:31:07.677297  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:07.691553  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:08.413950  353396 node_ready.go:35] waiting up to 6m0s for node "functional-652709" to be "Ready" ...
	I1213 10:31:08.414025  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:08.414055  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:08.414088  353396 retry.go:31] will retry after 345.496875ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:08.414094  353396 type.go:168] "Request Body" body=""
	I1213 10:31:08.414127  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:08.414139  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:08.414145  353396 retry.go:31] will retry after 223.686843ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
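	(From here the log is dominated by retry.go cycles: each failed `kubectl apply` is retried after a delay that grows from ~220ms to several seconds. A minimal sketch of such a loop — simplified to a plain doubling backoff, whereas minikube's retry.go adds jitter and uses the versioned kubectl binary with an explicit KUBECONFIG:

	    # retry a failing apply with doubling backoff (sketch)
	    manifest=/etc/kubernetes/addons/storageclass.yaml   # path taken from the log
	    delay=1
	    for attempt in 1 2 3 4 5; do
	      sudo kubectl apply --force -f "$manifest" && break
	      echo "attempt ${attempt} failed; retrying in ${delay}s"
	      sleep "$delay"
	      delay=$((delay * 2))
	    done
	)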
	I1213 10:31:08.414166  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:08.414498  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
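	(In parallel, node_ready.go polls GET /api/v1/nodes/functional-652709 roughly every 500ms; the empty status="" / milliseconds=0 responses mean the TCP connection itself is failing, not that the node is unready. Once the apiserver answers, the equivalent readiness check with kubectl would be:

	    # poll until the node's Ready condition reports True
	    until [ "$(kubectl get node functional-652709 \
	          -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}')" = "True" ]; do
	      echo "node not Ready yet"; sleep 2
	    done
	)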
	I1213 10:31:08.639014  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:08.708995  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:08.709048  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:08.709067  353396 retry.go:31] will retry after 375.63163ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:08.760277  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:08.818789  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:08.818835  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:08.818856  353396 retry.go:31] will retry after 406.416897ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:08.915066  353396 type.go:168] "Request Body" body=""
	I1213 10:31:08.915143  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:08.915484  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:09.084944  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:09.142294  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:09.145823  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:09.145856  353396 retry.go:31] will retry after 462.162588ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:09.226047  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:09.284957  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:09.285005  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:09.285029  353396 retry.go:31] will retry after 590.841892ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:09.414170  353396 type.go:168] "Request Body" body=""
	I1213 10:31:09.414270  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:09.414569  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:09.609047  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:09.669723  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:09.669808  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:09.669831  353396 retry.go:31] will retry after 579.936823ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:09.876057  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:09.914654  353396 type.go:168] "Request Body" body=""
	I1213 10:31:09.914781  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:09.915113  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:09.958653  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:09.959319  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:09.959356  353396 retry.go:31] will retry after 607.747477ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:10.250896  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:10.320327  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:10.320375  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:10.320395  353396 retry.go:31] will retry after 1.522220042s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:10.414670  353396 type.go:168] "Request Body" body=""
	I1213 10:31:10.414776  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:10.415078  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:10.415128  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
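	(This warning makes the failure mode explicit: every request to 192.168.49.2:8441 is refused, i.e. nothing is listening on the apiserver port yet, so the addon applies and the readiness poll are failing for the same underlying reason. A quick probe that distinguishes "refused" from "up but unhealthy", assuming curl is available on the host:

	    # connection refused => apiserver not listening; any HTTP response => it is up
	    curl -sk --max-time 2 https://192.168.49.2:8441/healthz || echo "connection refused or timed out"
	)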
	I1213 10:31:10.567453  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:10.637133  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:10.637170  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:10.637192  353396 retry.go:31] will retry after 1.738217883s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:10.914619  353396 type.go:168] "Request Body" body=""
	I1213 10:31:10.914713  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:10.915040  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:11.414837  353396 type.go:168] "Request Body" body=""
	I1213 10:31:11.414916  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:11.415223  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:11.842893  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:11.907661  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:11.907696  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:11.907728  353396 retry.go:31] will retry after 2.533033731s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:11.915037  353396 type.go:168] "Request Body" body=""
	I1213 10:31:11.915117  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:11.915423  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:12.376116  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:12.414883  353396 type.go:168] "Request Body" body=""
	I1213 10:31:12.414962  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:12.415244  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:12.415286  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:12.436301  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:12.440043  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:12.440078  353396 retry.go:31] will retry after 2.549851387s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:12.914750  353396 type.go:168] "Request Body" body=""
	I1213 10:31:12.914826  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:12.915091  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:13.414886  353396 type.go:168] "Request Body" body=""
	I1213 10:31:13.414964  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:13.415325  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:13.914980  353396 type.go:168] "Request Body" body=""
	I1213 10:31:13.915058  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:13.915431  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:14.414144  353396 type.go:168] "Request Body" body=""
	I1213 10:31:14.414226  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:14.414516  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:14.441795  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:14.521460  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:14.521500  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:14.521521  353396 retry.go:31] will retry after 3.212514963s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:14.915209  353396 type.go:168] "Request Body" body=""
	I1213 10:31:14.915291  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:14.915586  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:14.915630  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:14.990917  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:15.080462  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:15.084181  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:15.084216  353396 retry.go:31] will retry after 3.733369975s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:15.414758  353396 type.go:168] "Request Body" body=""
	I1213 10:31:15.414836  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:15.415124  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:15.914893  353396 type.go:168] "Request Body" body=""
	I1213 10:31:15.914962  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:15.915239  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:16.415068  353396 type.go:168] "Request Body" body=""
	I1213 10:31:16.415147  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:16.415460  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:16.914139  353396 type.go:168] "Request Body" body=""
	I1213 10:31:16.914218  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:16.914520  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:17.414166  353396 type.go:168] "Request Body" body=""
	I1213 10:31:17.414237  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:17.414497  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:17.414542  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:17.734589  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:17.791638  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:17.795431  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:17.795464  353396 retry.go:31] will retry after 2.280639456s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:17.914828  353396 type.go:168] "Request Body" body=""
	I1213 10:31:17.914907  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:17.915229  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:18.415056  353396 type.go:168] "Request Body" body=""
	I1213 10:31:18.415138  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:18.415477  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:18.817969  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:18.882172  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:18.882215  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:18.882235  353396 retry.go:31] will retry after 4.138686797s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:18.914321  353396 type.go:168] "Request Body" body=""
	I1213 10:31:18.914392  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:18.914663  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:19.414265  353396 type.go:168] "Request Body" body=""
	I1213 10:31:19.414351  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:19.414671  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:19.414743  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:19.914452  353396 type.go:168] "Request Body" body=""
	I1213 10:31:19.914532  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:19.914885  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:20.077334  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:20.142139  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:20.142182  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:20.142203  353396 retry.go:31] will retry after 8.217804099s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... repeated "Request Body"/"Request"/"Response" entries elided: GET https://192.168.49.2:8441/api/v1/nodes/functional-652709 polled every ~500 ms from 10:31:20.414 to 10:31:22.914, each attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; node_ready retry warning at 10:31:21.414 ...]
	I1213 10:31:23.021940  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:23.082413  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:23.086273  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:23.086307  353396 retry.go:31] will retry after 3.228749017s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... node polls elided (same pattern): every ~500 ms from 10:31:23.414 to 10:31:25.915, all connection refused; node_ready retry warnings at 10:31:23.415 and 10:31:25.914 ...]
	I1213 10:31:26.315317  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:26.370308  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:26.374436  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:26.374468  353396 retry.go:31] will retry after 6.181513775s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... node polls elided (same pattern): every ~500 ms from 10:31:26.414 to 10:31:27.915, all connection refused; node_ready retry warning at 10:31:27.915 ...]
	I1213 10:31:28.360839  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:28.414331  353396 type.go:168] "Request Body" body=""
	I1213 10:31:28.414406  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:28.414626  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:28.418709  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:28.418758  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:28.418778  353396 retry.go:31] will retry after 9.214302946s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... node polls elided (same pattern): every ~500 ms from 10:31:28.914 to 10:31:32.415, all connection refused; node_ready retry warnings at 10:31:30.414 and 10:31:32.415 ...]
	I1213 10:31:32.557021  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:32.617384  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:32.617431  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:32.617463  353396 retry.go:31] will retry after 16.934984193s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... node polls elided (same pattern): every ~500 ms from 10:31:32.914 to 10:31:37.414, all connection refused; node_ready retry warnings at 10:31:34.914 and 10:31:37.414 ...]
	I1213 10:31:37.633334  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:37.695165  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:37.698650  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:37.698681  353396 retry.go:31] will retry after 9.333447966s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... node polls elided (same pattern): every ~500 ms from 10:31:37.915 to 10:31:46.915, all connection refused; node_ready retry warnings at 10:31:39.414, 10:31:41.914, 10:31:43.914 and 10:31:45.915 ...]
	I1213 10:31:47.032831  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:47.089360  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:47.092850  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:47.092882  353396 retry.go:31] will retry after 14.257705184s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... node polls elided (same pattern): every ~500 ms from 10:31:47.414 to 10:31:49.415, all connection refused; node_ready retry warning at 10:31:48.415 ...]
	I1213 10:31:49.552673  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:49.614333  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:49.614392  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:49.614413  353396 retry.go:31] will retry after 23.024485713s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... node polls elided (same pattern): every ~500 ms from 10:31:49.914 to 10:32:00.914, all connection refused; node_ready retry warnings at 10:31:50.415, 10:31:52.914, 10:31:55.414, 10:31:57.415 and 10:31:59.914 ...]
	I1213 10:32:01.350855  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:32:01.414382  353396 type.go:168] "Request Body" body=""
	I1213 10:32:01.414452  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:01.414751  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:01.421471  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:32:01.421509  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:32:01.421528  353396 retry.go:31] will retry after 32.770422349s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... node polls elided (same pattern): every ~500 ms from 10:32:01.914 to 10:32:05.914, all connection refused; node_ready retry warnings at 10:32:02.414 and 10:32:04.414 ...]
	I1213 10:32:06.414151  353396 type.go:168] "Request Body" body=""
	I1213 10:32:06.414241  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:06.414588  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:06.914189  353396 type.go:168] "Request Body" body=""
	I1213 10:32:06.914264  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:06.914626  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:06.914721  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:07.414237  353396 type.go:168] "Request Body" body=""
	I1213 10:32:07.414336  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:07.414675  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:07.914726  353396 type.go:168] "Request Body" body=""
	I1213 10:32:07.914801  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:07.915094  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:08.414945  353396 type.go:168] "Request Body" body=""
	I1213 10:32:08.415038  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:08.415395  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:08.914139  353396 type.go:168] "Request Body" body=""
	I1213 10:32:08.914221  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:08.914527  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:09.414118  353396 type.go:168] "Request Body" body=""
	I1213 10:32:09.414186  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:09.414531  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:09.414607  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:09.914205  353396 type.go:168] "Request Body" body=""
	I1213 10:32:09.914276  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:09.914632  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:10.414215  353396 type.go:168] "Request Body" body=""
	I1213 10:32:10.414292  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:10.414629  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:10.914203  353396 type.go:168] "Request Body" body=""
	I1213 10:32:10.914292  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:10.914593  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:11.414242  353396 type.go:168] "Request Body" body=""
	I1213 10:32:11.414348  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:11.414703  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:11.414757  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:11.914433  353396 type.go:168] "Request Body" body=""
	I1213 10:32:11.914511  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:11.914889  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:12.414571  353396 type.go:168] "Request Body" body=""
	I1213 10:32:12.414678  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:12.414978  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:12.639532  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:32:12.701723  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:32:12.701768  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:32:12.701788  353396 retry.go:31] will retry after 24.373252759s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... identical polling continues from 10:32:12.9 through 10:32:33.9: GET https://192.168.49.2:8441/api/v1/nodes/functional-652709 every ~500ms, each attempt refused (dial tcp 192.168.49.2:8441: connect: connection refused), with the node_ready.go:55 will-retry warning repeating about every 2.5s ...]
	I1213 10:32:34.192937  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:32:34.265284  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:32:34.265320  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:32:34.265405  353396 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	[... identical polling continues from 10:32:34.4 through 10:32:36.9, each attempt refused, with the same node_ready.go:55 will-retry warnings ...]
	I1213 10:32:37.076016  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:32:37.141132  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:32:37.141183  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:32:37.141286  353396 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 10:32:37.146231  353396 out.go:179] * Enabled addons: 
	I1213 10:32:37.149102  353396 addons.go:530] duration metric: took 1m29.709445532s for enable addons: enabled=[]
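With the retry budget exhausted, both addons are surfaced as errors and the run continues with an empty addon list (enabled=[]). A hedged sketch of the gate these applies implicitly needed, namely waiting for the apiserver port to accept connections before shelling out to kubectl (an illustration only, not minikube's implementation; the URL is taken from the failing OpenAPI fetches above):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForAPIServer probes the given URL until it gets any HTTP response,
// which is enough to know the port is no longer refusing connections.
func waitForAPIServer(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// The apiserver's cert is not in the system trust store; this is
			// a reachability probe only, so skipping verification is fine here.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("apiserver at %s not reachable within %s", url, timeout)
}

func main() {
	if err := waitForAPIServer("https://localhost:8441/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}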
	I1213 10:32:37.414592  353396 type.go:168] "Request Body" body=""
	I1213 10:32:37.414736  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:37.415128  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:37.914163  353396 type.go:168] "Request Body" body=""
	I1213 10:32:37.914246  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:37.914580  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:38.414336  353396 type.go:168] "Request Body" body=""
	I1213 10:32:38.414415  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:38.414780  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:38.914239  353396 type.go:168] "Request Body" body=""
	I1213 10:32:38.914317  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:38.914675  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:38.914752  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:39.414390  353396 type.go:168] "Request Body" body=""
	I1213 10:32:39.414462  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:39.414811  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:39.914220  353396 type.go:168] "Request Body" body=""
	I1213 10:32:39.914296  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:39.914620  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:40.414231  353396 type.go:168] "Request Body" body=""
	I1213 10:32:40.414307  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:40.414622  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:40.914193  353396 type.go:168] "Request Body" body=""
	I1213 10:32:40.914271  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:40.914548  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:41.414251  353396 type.go:168] "Request Body" body=""
	I1213 10:32:41.414348  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:41.414708  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:41.414763  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:41.914229  353396 type.go:168] "Request Body" body=""
	I1213 10:32:41.914327  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:41.914643  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:42.414161  353396 type.go:168] "Request Body" body=""
	I1213 10:32:42.414248  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:42.414516  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:42.914567  353396 type.go:168] "Request Body" body=""
	I1213 10:32:42.914643  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:42.914974  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:43.414788  353396 type.go:168] "Request Body" body=""
	I1213 10:32:43.414863  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:43.415192  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:43.415248  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:43.915667  353396 type.go:168] "Request Body" body=""
	I1213 10:32:43.915743  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:43.916016  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:44.414833  353396 type.go:168] "Request Body" body=""
	I1213 10:32:44.414913  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:44.415264  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:44.915103  353396 type.go:168] "Request Body" body=""
	I1213 10:32:44.915182  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:44.915522  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:45.414185  353396 type.go:168] "Request Body" body=""
	I1213 10:32:45.414262  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:45.414578  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:45.914231  353396 type.go:168] "Request Body" body=""
	I1213 10:32:45.914307  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:45.914655  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:45.914730  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:46.414248  353396 type.go:168] "Request Body" body=""
	I1213 10:32:46.414348  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:46.414706  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:46.914404  353396 type.go:168] "Request Body" body=""
	I1213 10:32:46.914482  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:46.914848  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:47.414245  353396 type.go:168] "Request Body" body=""
	I1213 10:32:47.414332  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:47.414670  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:47.915115  353396 type.go:168] "Request Body" body=""
	I1213 10:32:47.915188  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:47.915496  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:47.915548  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:48.414163  353396 type.go:168] "Request Body" body=""
	I1213 10:32:48.414231  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:48.414501  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:48.914202  353396 type.go:168] "Request Body" body=""
	I1213 10:32:48.914276  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:48.914656  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:49.414387  353396 type.go:168] "Request Body" body=""
	I1213 10:32:49.414468  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:49.414814  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:49.914540  353396 type.go:168] "Request Body" body=""
	I1213 10:32:49.914615  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:49.914986  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:50.414789  353396 type.go:168] "Request Body" body=""
	I1213 10:32:50.414867  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:50.415215  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:50.415272  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	... (poll loop condensed: the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-652709 request repeats every ~500ms from 10:32:50.9 through 10:33:51.9, each attempt logging an empty response, status="" milliseconds=0, and node_ready.go:55 re-emitting the same warning roughly every 2-2.5s: error getting node "functional-652709" condition "Ready" status (will retry): dial tcp 192.168.49.2:8441: connect: connection refused) ...
	I1213 10:33:52.414409  353396 type.go:168] "Request Body" body=""
	I1213 10:33:52.414499  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:52.414831  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:52.414892  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:52.914704  353396 type.go:168] "Request Body" body=""
	I1213 10:33:52.914782  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:52.915049  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:53.414824  353396 type.go:168] "Request Body" body=""
	I1213 10:33:53.414900  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:53.415223  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:53.915049  353396 type.go:168] "Request Body" body=""
	I1213 10:33:53.915127  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:53.915475  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:54.415020  353396 type.go:168] "Request Body" body=""
	I1213 10:33:54.415131  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:54.415393  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:54.415434  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:54.914119  353396 type.go:168] "Request Body" body=""
	I1213 10:33:54.914214  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:54.914516  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:55.414234  353396 type.go:168] "Request Body" body=""
	I1213 10:33:55.414310  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:55.414632  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:55.914184  353396 type.go:168] "Request Body" body=""
	I1213 10:33:55.914266  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:55.914529  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:56.414289  353396 type.go:168] "Request Body" body=""
	I1213 10:33:56.414370  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:56.414757  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:56.914479  353396 type.go:168] "Request Body" body=""
	I1213 10:33:56.914560  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:56.914914  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:56.914974  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:57.414182  353396 type.go:168] "Request Body" body=""
	I1213 10:33:57.414256  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:57.414574  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:57.914733  353396 type.go:168] "Request Body" body=""
	I1213 10:33:57.914817  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:57.915173  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:58.414963  353396 type.go:168] "Request Body" body=""
	I1213 10:33:58.415038  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:58.415384  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:58.915098  353396 type.go:168] "Request Body" body=""
	I1213 10:33:58.915166  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:58.915457  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:58.915498  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:59.414217  353396 type.go:168] "Request Body" body=""
	I1213 10:33:59.414293  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:59.414619  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:59.914358  353396 type.go:168] "Request Body" body=""
	I1213 10:33:59.914442  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:59.914849  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:00.414213  353396 type.go:168] "Request Body" body=""
	I1213 10:34:00.414339  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:00.414709  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:00.914229  353396 type.go:168] "Request Body" body=""
	I1213 10:34:00.914306  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:00.914634  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:01.414239  353396 type.go:168] "Request Body" body=""
	I1213 10:34:01.414315  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:01.414624  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:01.414672  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:01.914168  353396 type.go:168] "Request Body" body=""
	I1213 10:34:01.914244  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:01.914585  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:02.414240  353396 type.go:168] "Request Body" body=""
	I1213 10:34:02.414320  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:02.414671  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:02.914495  353396 type.go:168] "Request Body" body=""
	I1213 10:34:02.914572  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:02.914905  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:03.414563  353396 type.go:168] "Request Body" body=""
	I1213 10:34:03.414642  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:03.414937  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:03.414981  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:03.914802  353396 type.go:168] "Request Body" body=""
	I1213 10:34:03.914886  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:03.915200  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:04.415061  353396 type.go:168] "Request Body" body=""
	I1213 10:34:04.415173  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:04.415604  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:04.915045  353396 type.go:168] "Request Body" body=""
	I1213 10:34:04.915117  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:04.915454  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:05.414181  353396 type.go:168] "Request Body" body=""
	I1213 10:34:05.414260  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:05.414598  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:05.914312  353396 type.go:168] "Request Body" body=""
	I1213 10:34:05.914397  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:05.914761  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:05.914818  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:06.414172  353396 type.go:168] "Request Body" body=""
	I1213 10:34:06.414246  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:06.414578  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:06.914215  353396 type.go:168] "Request Body" body=""
	I1213 10:34:06.914294  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:06.914638  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:07.414373  353396 type.go:168] "Request Body" body=""
	I1213 10:34:07.414449  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:07.414801  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:07.914926  353396 type.go:168] "Request Body" body=""
	I1213 10:34:07.914993  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:07.915307  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:07.915360  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:08.415127  353396 type.go:168] "Request Body" body=""
	I1213 10:34:08.415205  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:08.415596  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:08.914374  353396 type.go:168] "Request Body" body=""
	I1213 10:34:08.914456  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:08.914801  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:09.414148  353396 type.go:168] "Request Body" body=""
	I1213 10:34:09.414219  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:09.414479  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:09.914227  353396 type.go:168] "Request Body" body=""
	I1213 10:34:09.914306  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:09.914661  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:10.414240  353396 type.go:168] "Request Body" body=""
	I1213 10:34:10.414319  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:10.414680  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:10.414778  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:10.918812  353396 type.go:168] "Request Body" body=""
	I1213 10:34:10.918890  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:10.919160  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:11.415030  353396 type.go:168] "Request Body" body=""
	I1213 10:34:11.415107  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:11.415436  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:11.914150  353396 type.go:168] "Request Body" body=""
	I1213 10:34:11.914232  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:11.914571  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:12.415071  353396 type.go:168] "Request Body" body=""
	I1213 10:34:12.415146  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:12.415421  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:12.415479  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:12.914213  353396 type.go:168] "Request Body" body=""
	I1213 10:34:12.914288  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:12.914622  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:13.414338  353396 type.go:168] "Request Body" body=""
	I1213 10:34:13.414421  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:13.414784  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:13.914194  353396 type.go:168] "Request Body" body=""
	I1213 10:34:13.914270  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:13.914538  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:14.414217  353396 type.go:168] "Request Body" body=""
	I1213 10:34:14.414294  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:14.414624  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:14.914210  353396 type.go:168] "Request Body" body=""
	I1213 10:34:14.914290  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:14.914590  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:14.914639  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:15.414121  353396 type.go:168] "Request Body" body=""
	I1213 10:34:15.414260  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:15.414569  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:15.914203  353396 type.go:168] "Request Body" body=""
	I1213 10:34:15.914284  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:15.914613  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:16.414225  353396 type.go:168] "Request Body" body=""
	I1213 10:34:16.414308  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:16.414648  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:16.914359  353396 type.go:168] "Request Body" body=""
	I1213 10:34:16.914447  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:16.914753  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:16.914798  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:17.414239  353396 type.go:168] "Request Body" body=""
	I1213 10:34:17.414312  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:17.414646  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:17.914569  353396 type.go:168] "Request Body" body=""
	I1213 10:34:17.914646  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:17.914997  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:18.414794  353396 type.go:168] "Request Body" body=""
	I1213 10:34:18.414864  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:18.415130  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:18.914878  353396 type.go:168] "Request Body" body=""
	I1213 10:34:18.914956  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:18.915256  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:18.915309  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:19.415048  353396 type.go:168] "Request Body" body=""
	I1213 10:34:19.415124  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:19.415473  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:19.914155  353396 type.go:168] "Request Body" body=""
	I1213 10:34:19.914239  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:19.914557  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:20.414216  353396 type.go:168] "Request Body" body=""
	I1213 10:34:20.414293  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:20.414595  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:20.914298  353396 type.go:168] "Request Body" body=""
	I1213 10:34:20.914378  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:20.914742  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:21.414175  353396 type.go:168] "Request Body" body=""
	I1213 10:34:21.414247  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:21.414574  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:21.414628  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:21.914278  353396 type.go:168] "Request Body" body=""
	I1213 10:34:21.914361  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:21.914745  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:22.414284  353396 type.go:168] "Request Body" body=""
	I1213 10:34:22.414361  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:22.414747  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:22.914549  353396 type.go:168] "Request Body" body=""
	I1213 10:34:22.914626  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:22.914988  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:23.414779  353396 type.go:168] "Request Body" body=""
	I1213 10:34:23.414855  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:23.415214  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:23.415277  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:23.915088  353396 type.go:168] "Request Body" body=""
	I1213 10:34:23.915170  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:23.915507  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:24.414168  353396 type.go:168] "Request Body" body=""
	I1213 10:34:24.414241  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:24.414497  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:24.914176  353396 type.go:168] "Request Body" body=""
	I1213 10:34:24.914250  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:24.914580  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:25.414317  353396 type.go:168] "Request Body" body=""
	I1213 10:34:25.414397  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:25.414758  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:25.914443  353396 type.go:168] "Request Body" body=""
	I1213 10:34:25.914516  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:25.914878  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:25.914936  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:26.414193  353396 type.go:168] "Request Body" body=""
	I1213 10:34:26.414269  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:26.414575  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:26.914218  353396 type.go:168] "Request Body" body=""
	I1213 10:34:26.914293  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:26.914611  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:27.414157  353396 type.go:168] "Request Body" body=""
	I1213 10:34:27.414224  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:27.414475  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:27.914651  353396 type.go:168] "Request Body" body=""
	I1213 10:34:27.914747  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:27.915082  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:27.915143  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:28.414747  353396 type.go:168] "Request Body" body=""
	I1213 10:34:28.414831  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:28.415166  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:28.914918  353396 type.go:168] "Request Body" body=""
	I1213 10:34:28.914994  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:28.915317  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:29.415099  353396 type.go:168] "Request Body" body=""
	I1213 10:34:29.415182  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:29.415527  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:29.914143  353396 type.go:168] "Request Body" body=""
	I1213 10:34:29.914235  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:29.914632  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:30.414347  353396 type.go:168] "Request Body" body=""
	I1213 10:34:30.414415  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:30.414708  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:30.414755  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:30.914237  353396 type.go:168] "Request Body" body=""
	I1213 10:34:30.914320  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:30.914657  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:31.414414  353396 type.go:168] "Request Body" body=""
	I1213 10:34:31.414503  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:31.414889  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:31.914157  353396 type.go:168] "Request Body" body=""
	I1213 10:34:31.914230  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:31.914496  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:32.414209  353396 type.go:168] "Request Body" body=""
	I1213 10:34:32.414292  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:32.414648  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:32.914128  353396 type.go:168] "Request Body" body=""
	I1213 10:34:32.914211  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:32.914560  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:32.914616  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:33.414256  353396 type.go:168] "Request Body" body=""
	I1213 10:34:33.414326  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:33.414617  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:33.914297  353396 type.go:168] "Request Body" body=""
	I1213 10:34:33.914377  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:33.914762  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:34.414238  353396 type.go:168] "Request Body" body=""
	I1213 10:34:34.414315  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:34.414643  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:34.914151  353396 type.go:168] "Request Body" body=""
	I1213 10:34:34.914224  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:34.914486  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:35.414223  353396 type.go:168] "Request Body" body=""
	I1213 10:34:35.414304  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:35.414642  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:35.414735  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:35.914235  353396 type.go:168] "Request Body" body=""
	I1213 10:34:35.914320  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:35.914658  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:36.414261  353396 type.go:168] "Request Body" body=""
	I1213 10:34:36.414332  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:36.414605  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:36.914211  353396 type.go:168] "Request Body" body=""
	I1213 10:34:36.914285  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:36.914640  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:37.414211  353396 type.go:168] "Request Body" body=""
	I1213 10:34:37.414289  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:37.414584  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:37.914675  353396 type.go:168] "Request Body" body=""
	I1213 10:34:37.914757  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:37.915023  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:37.915064  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:38.414903  353396 type.go:168] "Request Body" body=""
	I1213 10:34:38.414986  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:38.415396  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:38.914137  353396 type.go:168] "Request Body" body=""
	I1213 10:34:38.914223  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:38.914580  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:39.414172  353396 type.go:168] "Request Body" body=""
	I1213 10:34:39.414253  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:39.414582  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:39.914286  353396 type.go:168] "Request Body" body=""
	I1213 10:34:39.914363  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:39.914715  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:40.414232  353396 type.go:168] "Request Body" body=""
	I1213 10:34:40.414314  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:40.414677  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:40.414753  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:40.914094  353396 type.go:168] "Request Body" body=""
	I1213 10:34:40.914175  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:40.914491  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:41.414243  353396 type.go:168] "Request Body" body=""
	I1213 10:34:41.414321  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:41.414666  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:41.914412  353396 type.go:168] "Request Body" body=""
	I1213 10:34:41.914495  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:41.914870  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:42.414297  353396 type.go:168] "Request Body" body=""
	I1213 10:34:42.414371  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:42.414633  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:42.914585  353396 type.go:168] "Request Body" body=""
	I1213 10:34:42.914668  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:42.915024  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:42.915079  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	[... ~120 further iterations elided: the same GET https://192.168.49.2:8441/api/v1/nodes/functional-652709 request/response pair repeats at ~500 ms intervals with empty responses, and node_ready.go:55 logs the same "connection refused" warning every few retries, from 10:34:43 through 10:35:42 ...]
	I1213 10:35:43.414794  353396 type.go:168] "Request Body" body=""
	I1213 10:35:43.414871  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:43.415204  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:43.914954  353396 type.go:168] "Request Body" body=""
	I1213 10:35:43.915028  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:43.915294  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:43.915335  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:44.415170  353396 type.go:168] "Request Body" body=""
	I1213 10:35:44.415252  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:44.415625  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:44.914230  353396 type.go:168] "Request Body" body=""
	I1213 10:35:44.914311  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:44.914638  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:45.414189  353396 type.go:168] "Request Body" body=""
	I1213 10:35:45.414273  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:45.414545  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:45.914206  353396 type.go:168] "Request Body" body=""
	I1213 10:35:45.914285  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:45.914623  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:46.414264  353396 type.go:168] "Request Body" body=""
	I1213 10:35:46.414341  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:46.414706  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:46.414761  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:46.914394  353396 type.go:168] "Request Body" body=""
	I1213 10:35:46.914496  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:46.914842  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:47.414246  353396 type.go:168] "Request Body" body=""
	I1213 10:35:47.414321  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:47.414636  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:47.914823  353396 type.go:168] "Request Body" body=""
	I1213 10:35:47.914900  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:47.915205  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:48.414980  353396 type.go:168] "Request Body" body=""
	I1213 10:35:48.415049  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:48.415356  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:48.415416  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:48.915139  353396 type.go:168] "Request Body" body=""
	I1213 10:35:48.915222  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:48.915541  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:49.414295  353396 type.go:168] "Request Body" body=""
	I1213 10:35:49.414372  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:49.414675  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:49.914178  353396 type.go:168] "Request Body" body=""
	I1213 10:35:49.914246  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:49.914565  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:50.414236  353396 type.go:168] "Request Body" body=""
	I1213 10:35:50.414322  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:50.414646  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:50.914236  353396 type.go:168] "Request Body" body=""
	I1213 10:35:50.914312  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:50.914633  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:50.914705  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:51.414174  353396 type.go:168] "Request Body" body=""
	I1213 10:35:51.414251  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:51.414515  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:51.914227  353396 type.go:168] "Request Body" body=""
	I1213 10:35:51.914303  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:51.914621  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:52.414226  353396 type.go:168] "Request Body" body=""
	I1213 10:35:52.414312  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:52.414660  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:52.914528  353396 type.go:168] "Request Body" body=""
	I1213 10:35:52.914597  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:52.914892  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:52.914936  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:53.414626  353396 type.go:168] "Request Body" body=""
	I1213 10:35:53.414743  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:53.415155  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:53.914985  353396 type.go:168] "Request Body" body=""
	I1213 10:35:53.915060  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:53.915423  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:54.414132  353396 type.go:168] "Request Body" body=""
	I1213 10:35:54.414212  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:54.414538  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:54.914221  353396 type.go:168] "Request Body" body=""
	I1213 10:35:54.914300  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:54.914639  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:55.414361  353396 type.go:168] "Request Body" body=""
	I1213 10:35:55.414442  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:55.414760  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:55.414814  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:55.914153  353396 type.go:168] "Request Body" body=""
	I1213 10:35:55.914231  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:55.914493  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:56.414257  353396 type.go:168] "Request Body" body=""
	I1213 10:35:56.414339  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:56.414657  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:56.914216  353396 type.go:168] "Request Body" body=""
	I1213 10:35:56.914293  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:56.914667  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:57.414176  353396 type.go:168] "Request Body" body=""
	I1213 10:35:57.414254  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:57.414584  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:57.914966  353396 type.go:168] "Request Body" body=""
	I1213 10:35:57.915050  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:57.915391  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:57.915453  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:58.414132  353396 type.go:168] "Request Body" body=""
	I1213 10:35:58.414215  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:58.414528  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:58.914158  353396 type.go:168] "Request Body" body=""
	I1213 10:35:58.914236  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:58.914510  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:59.414124  353396 type.go:168] "Request Body" body=""
	I1213 10:35:59.414208  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:59.414536  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:59.914263  353396 type.go:168] "Request Body" body=""
	I1213 10:35:59.914349  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:59.914758  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:00.421144  353396 type.go:168] "Request Body" body=""
	I1213 10:36:00.421250  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:00.421612  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:00.421665  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:00.914230  353396 type.go:168] "Request Body" body=""
	I1213 10:36:00.914305  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:00.914644  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:01.414215  353396 type.go:168] "Request Body" body=""
	I1213 10:36:01.414292  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:01.414622  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:01.914179  353396 type.go:168] "Request Body" body=""
	I1213 10:36:01.914256  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:01.914522  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:02.414207  353396 type.go:168] "Request Body" body=""
	I1213 10:36:02.414283  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:02.414571  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:02.914503  353396 type.go:168] "Request Body" body=""
	I1213 10:36:02.914581  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:02.914941  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:02.915005  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:03.414758  353396 type.go:168] "Request Body" body=""
	I1213 10:36:03.414829  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:03.415178  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:03.914982  353396 type.go:168] "Request Body" body=""
	I1213 10:36:03.915057  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:03.915402  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:04.415064  353396 type.go:168] "Request Body" body=""
	I1213 10:36:04.415144  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:04.415523  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:04.914219  353396 type.go:168] "Request Body" body=""
	I1213 10:36:04.914298  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:04.914617  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:05.414231  353396 type.go:168] "Request Body" body=""
	I1213 10:36:05.414310  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:05.414671  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:05.414749  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:05.914422  353396 type.go:168] "Request Body" body=""
	I1213 10:36:05.914498  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:05.914864  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:06.414177  353396 type.go:168] "Request Body" body=""
	I1213 10:36:06.414262  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:06.414578  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:06.914278  353396 type.go:168] "Request Body" body=""
	I1213 10:36:06.914363  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:06.914742  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:07.414300  353396 type.go:168] "Request Body" body=""
	I1213 10:36:07.414382  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:07.414720  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:07.414787  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:07.914791  353396 type.go:168] "Request Body" body=""
	I1213 10:36:07.914860  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:07.915123  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:08.414897  353396 type.go:168] "Request Body" body=""
	I1213 10:36:08.414981  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:08.415336  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:08.915032  353396 type.go:168] "Request Body" body=""
	I1213 10:36:08.915117  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:08.915466  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:09.414191  353396 type.go:168] "Request Body" body=""
	I1213 10:36:09.414260  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:09.414540  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:09.914271  353396 type.go:168] "Request Body" body=""
	I1213 10:36:09.914352  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:09.914675  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:09.914752  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:10.414138  353396 type.go:168] "Request Body" body=""
	I1213 10:36:10.414216  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:10.414557  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:10.914195  353396 type.go:168] "Request Body" body=""
	I1213 10:36:10.914266  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:10.914534  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:11.414263  353396 type.go:168] "Request Body" body=""
	I1213 10:36:11.414339  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:11.414753  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:11.914459  353396 type.go:168] "Request Body" body=""
	I1213 10:36:11.914533  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:11.914890  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:11.914948  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:12.414139  353396 type.go:168] "Request Body" body=""
	I1213 10:36:12.414211  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:12.414474  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:12.914342  353396 type.go:168] "Request Body" body=""
	I1213 10:36:12.914427  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:12.914750  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:13.414215  353396 type.go:168] "Request Body" body=""
	I1213 10:36:13.414295  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:13.414650  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:13.914372  353396 type.go:168] "Request Body" body=""
	I1213 10:36:13.914451  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:13.914752  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:14.414251  353396 type.go:168] "Request Body" body=""
	I1213 10:36:14.414328  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:14.414656  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:14.414721  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:14.914256  353396 type.go:168] "Request Body" body=""
	I1213 10:36:14.914328  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:14.914611  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:15.415149  353396 type.go:168] "Request Body" body=""
	I1213 10:36:15.415221  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:15.415540  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:15.914232  353396 type.go:168] "Request Body" body=""
	I1213 10:36:15.914308  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:15.914678  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:16.414245  353396 type.go:168] "Request Body" body=""
	I1213 10:36:16.414325  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:16.414657  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:16.914285  353396 type.go:168] "Request Body" body=""
	I1213 10:36:16.914367  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:16.914649  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:16.914725  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:17.414244  353396 type.go:168] "Request Body" body=""
	I1213 10:36:17.414333  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:17.414644  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:17.914739  353396 type.go:168] "Request Body" body=""
	I1213 10:36:17.914821  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:17.915139  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:18.414875  353396 type.go:168] "Request Body" body=""
	I1213 10:36:18.414955  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:18.415226  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:18.915006  353396 type.go:168] "Request Body" body=""
	I1213 10:36:18.915082  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:18.915415  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:18.915472  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:19.415096  353396 type.go:168] "Request Body" body=""
	I1213 10:36:19.415183  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:19.415488  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:19.914201  353396 type.go:168] "Request Body" body=""
	I1213 10:36:19.914273  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:19.914619  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:20.414338  353396 type.go:168] "Request Body" body=""
	I1213 10:36:20.414409  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:20.414746  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:20.914260  353396 type.go:168] "Request Body" body=""
	I1213 10:36:20.914335  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:20.914704  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:21.414252  353396 type.go:168] "Request Body" body=""
	I1213 10:36:21.414338  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:21.414656  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:21.414724  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:21.914251  353396 type.go:168] "Request Body" body=""
	I1213 10:36:21.914328  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:21.914668  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:22.414268  353396 type.go:168] "Request Body" body=""
	I1213 10:36:22.414350  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:22.414680  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:22.914474  353396 type.go:168] "Request Body" body=""
	I1213 10:36:22.914553  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:22.914836  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:23.414235  353396 type.go:168] "Request Body" body=""
	I1213 10:36:23.414326  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:23.414670  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:23.414743  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:23.914266  353396 type.go:168] "Request Body" body=""
	I1213 10:36:23.914367  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:23.914763  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:24.414152  353396 type.go:168] "Request Body" body=""
	I1213 10:36:24.414223  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:24.414481  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:24.914197  353396 type.go:168] "Request Body" body=""
	I1213 10:36:24.914304  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:24.914663  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:25.414263  353396 type.go:168] "Request Body" body=""
	I1213 10:36:25.414339  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:25.414676  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:25.914948  353396 type.go:168] "Request Body" body=""
	I1213 10:36:25.915020  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:25.915277  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:25.915318  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:26.415116  353396 type.go:168] "Request Body" body=""
	I1213 10:36:26.415208  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:26.415550  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:26.914250  353396 type.go:168] "Request Body" body=""
	I1213 10:36:26.914329  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:26.914612  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:27.414291  353396 type.go:168] "Request Body" body=""
	I1213 10:36:27.414364  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:27.414625  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:27.914739  353396 type.go:168] "Request Body" body=""
	I1213 10:36:27.914816  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:27.915095  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:28.414897  353396 type.go:168] "Request Body" body=""
	I1213 10:36:28.414982  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:28.415303  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:28.415358  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:28.915084  353396 type.go:168] "Request Body" body=""
	I1213 10:36:28.915156  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:28.915451  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:29.414204  353396 type.go:168] "Request Body" body=""
	I1213 10:36:29.414283  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:29.414602  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:29.914216  353396 type.go:168] "Request Body" body=""
	I1213 10:36:29.914292  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:29.914661  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:30.414927  353396 type.go:168] "Request Body" body=""
	I1213 10:36:30.415000  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:30.415303  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:30.915117  353396 type.go:168] "Request Body" body=""
	I1213 10:36:30.915200  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:30.915511  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:30.915566  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:31.414255  353396 type.go:168] "Request Body" body=""
	I1213 10:36:31.414349  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:31.414739  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:31.914164  353396 type.go:168] "Request Body" body=""
	I1213 10:36:31.914237  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:31.914519  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:32.414229  353396 type.go:168] "Request Body" body=""
	I1213 10:36:32.414304  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:32.414647  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:32.914523  353396 type.go:168] "Request Body" body=""
	I1213 10:36:32.914604  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:32.914915  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-652709 request/response pair repeats every ~500ms from 10:36:33 through 10:37:07; every probe fails, and node_ready.go:55 re-logs the "connection refused" warning (will retry) roughly every two seconds ...]
	W1213 10:37:07.915217  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:37:08.414751  353396 node_ready.go:38] duration metric: took 6m0.000751586s for node "functional-652709" to be "Ready" ...
	I1213 10:37:08.417881  353396 out.go:203] 
	W1213 10:37:08.420786  353396 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 10:37:08.420808  353396 out.go:285] * 
	W1213 10:37:08.422957  353396 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:37:08.425703  353396 out.go:203] 
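
The six minutes of polling above are minikube's node-readiness wait running out against an apiserver that never comes up. As a rough illustration (not minikube's own code), the same probe can be reproduced from the host with a shell loop; the endpoint and the 500ms cadence are taken from the log, the loop itself is a sketch:

	# Sketch only: 720 attempts at 0.5s intervals ~= the 6m deadline that
	# node_ready.go reports. Each attempt hits the same URL the log shows.
	for i in $(seq 1 720); do
	  curl -sk --max-time 2 \
	    https://192.168.49.2:8441/api/v1/nodes/functional-652709 && break
	  sleep 0.5
	done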
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 10:37:15 functional-652709 containerd[5259]: time="2025-12-13T10:37:15.942332098Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 10:37:16 functional-652709 containerd[5259]: time="2025-12-13T10:37:16.961845527Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 13 10:37:16 functional-652709 containerd[5259]: time="2025-12-13T10:37:16.964100465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 13 10:37:16 functional-652709 containerd[5259]: time="2025-12-13T10:37:16.974175449Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 10:37:16 functional-652709 containerd[5259]: time="2025-12-13T10:37:16.975067256Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 10:37:17 functional-652709 containerd[5259]: time="2025-12-13T10:37:17.942354469Z" level=info msg="No images store for sha256:c6249fed01776cbcb41b36b4a4c0ab7eea746dbacf3e857d9f5cb60a67157990"
	Dec 13 10:37:17 functional-652709 containerd[5259]: time="2025-12-13T10:37:17.944548787Z" level=info msg="ImageCreate event name:\"docker.io/library/minikube-local-cache-test:functional-652709\""
	Dec 13 10:37:17 functional-652709 containerd[5259]: time="2025-12-13T10:37:17.952195292Z" level=info msg="ImageCreate event name:\"sha256:3e30c52a5eb43a8e5ba840b7293fbdeceebf98349701321a36a877e21e3b575a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 10:37:17 functional-652709 containerd[5259]: time="2025-12-13T10:37:17.952772233Z" level=info msg="ImageUpdate event name:\"docker.io/library/minikube-local-cache-test:functional-652709\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 10:37:18 functional-652709 containerd[5259]: time="2025-12-13T10:37:18.780385969Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\""
	Dec 13 10:37:18 functional-652709 containerd[5259]: time="2025-12-13T10:37:18.782829258Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:latest\""
	Dec 13 10:37:18 functional-652709 containerd[5259]: time="2025-12-13T10:37:18.784790556Z" level=info msg="ImageDelete event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\""
	Dec 13 10:37:18 functional-652709 containerd[5259]: time="2025-12-13T10:37:18.796574348Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\" returns successfully"
	Dec 13 10:37:19 functional-652709 containerd[5259]: time="2025-12-13T10:37:19.745952345Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\""
	Dec 13 10:37:19 functional-652709 containerd[5259]: time="2025-12-13T10:37:19.748349159Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.1\""
	Dec 13 10:37:19 functional-652709 containerd[5259]: time="2025-12-13T10:37:19.750670888Z" level=info msg="ImageDelete event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\""
	Dec 13 10:37:19 functional-652709 containerd[5259]: time="2025-12-13T10:37:19.760596347Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\" returns successfully"
	Dec 13 10:37:19 functional-652709 containerd[5259]: time="2025-12-13T10:37:19.909948044Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 13 10:37:19 functional-652709 containerd[5259]: time="2025-12-13T10:37:19.912297375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 13 10:37:19 functional-652709 containerd[5259]: time="2025-12-13T10:37:19.919399883Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 10:37:19 functional-652709 containerd[5259]: time="2025-12-13T10:37:19.919749860Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 10:37:20 functional-652709 containerd[5259]: time="2025-12-13T10:37:20.069647350Z" level=info msg="No images store for sha256:3ac89611d5efd8eb74174b1f04c33b7e73b651cec35b5498caf0cfdd2efd7d48"
	Dec 13 10:37:20 functional-652709 containerd[5259]: time="2025-12-13T10:37:20.072047446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.1\""
	Dec 13 10:37:20 functional-652709 containerd[5259]: time="2025-12-13T10:37:20.079456107Z" level=info msg="ImageCreate event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 10:37:20 functional-652709 containerd[5259]: time="2025-12-13T10:37:20.079893502Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:21.833075    9248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:21.833932    9248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:21.835744    9248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:21.836428    9248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:21.838160    9248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
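
kubectl fails the same way the readiness poll did, so the next useful check is whether anything listens on 8441 inside the node at all. A hedged sketch (it assumes the functional-652709 container is still running, which the docker inspect output below confirms):

	# Look for an apiserver listener and container inside the kic node.
	docker exec functional-652709 sh -c 'ss -ltnp | grep :8441 || echo "nothing listening on 8441"'
	docker exec functional-652709 sh -c 'crictl ps -a --name kube-apiserver'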
	
	
	==> dmesg <==
	[ +12.907693] overlayfs: idmapped layers are currently not supported
	[Dec13 09:25] overlayfs: idmapped layers are currently not supported
	[ +26.192425] overlayfs: idmapped layers are currently not supported
	[Dec13 09:26] overlayfs: idmapped layers are currently not supported
	[ +25.729788] overlayfs: idmapped layers are currently not supported
	[Dec13 09:27] overlayfs: idmapped layers are currently not supported
	[Dec13 09:28] overlayfs: idmapped layers are currently not supported
	[Dec13 09:31] overlayfs: idmapped layers are currently not supported
	[Dec13 09:32] overlayfs: idmapped layers are currently not supported
	[ +17.979093] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec13 09:43] overlayfs: idmapped layers are currently not supported
	[Dec13 09:45] overlayfs: idmapped layers are currently not supported
	[ +25.885102] overlayfs: idmapped layers are currently not supported
	[Dec13 09:46] overlayfs: idmapped layers are currently not supported
	[ +22.078149] overlayfs: idmapped layers are currently not supported
	[Dec13 09:47] overlayfs: idmapped layers are currently not supported
	[Dec13 09:48] overlayfs: idmapped layers are currently not supported
	[Dec13 09:49] overlayfs: idmapped layers are currently not supported
	[Dec13 09:51] overlayfs: idmapped layers are currently not supported
	[ +17.043564] overlayfs: idmapped layers are currently not supported
	[Dec13 09:52] overlayfs: idmapped layers are currently not supported
	[Dec13 09:53] overlayfs: idmapped layers are currently not supported
	[Dec13 10:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:19] hrtimer: interrupt took 21247146 ns
	
	
	==> kernel <==
	 10:37:21 up  3:19,  0 user,  load average: 0.74, 0.42, 0.81
	Linux functional-652709 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:37:18 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:37:19 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 824.
	Dec 13 10:37:19 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:37:19 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:37:19 functional-652709 kubelet[9023]: E1213 10:37:19.181607    9023 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:37:19 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:37:19 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:37:19 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 825.
	Dec 13 10:37:19 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:37:19 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:37:19 functional-652709 kubelet[9104]: E1213 10:37:19.913080    9104 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:37:19 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:37:19 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:37:20 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 826.
	Dec 13 10:37:20 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:37:20 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:37:20 functional-652709 kubelet[9146]: E1213 10:37:20.724075    9146 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:37:20 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:37:20 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:37:21 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 827.
	Dec 13 10:37:21 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:37:21 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:37:21 functional-652709 kubelet[9166]: E1213 10:37:21.469044    9166 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:37:21 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:37:21 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
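
The kubelet crash loop above is the actual root cause: this kubelet build refuses to start on a host still running cgroup v1, so the apiserver static pod is never launched and every probe of 8441 earlier in the log is refused. A quick sketch to confirm the cgroup mode (stat prints "cgroup2fs" on a v2 host and "tmpfs" on v1):

	# Confirm the cgroup mode the kubelet validation rejects.
	stat -fc %T /sys/fs/cgroup/
	# With CgroupnsMode "host" (see the docker inspect output below), the
	# node container sees the same layout:
	docker exec functional-652709 stat -fc %T /sys/fs/cgroup/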
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-652709 -n functional-652709
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-652709 -n functional-652709: exit status 2 (360.352746ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-652709" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.27s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-652709 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-652709 get pods: exit status 1 (122.13726ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-652709 get pods": exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-652709
helpers_test.go:244: (dbg) docker inspect functional-652709:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f",
	        "Created": "2025-12-13T10:22:44.366993781Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 347931,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:22:44.437030763Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/hostname",
	        "HostsPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/hosts",
	        "LogPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f-json.log",
	        "Name": "/functional-652709",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-652709:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-652709",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f",
	                "LowerDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095-init/diff:/var/lib/docker/overlay2/5d0ca86320f2c06765995d644a73af30cd6ddb36f5c1a2a6b1ebb24af53cdabd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/merged",
	                "UpperDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/diff",
	                "WorkDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-652709",
	                "Source": "/var/lib/docker/volumes/functional-652709/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-652709",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-652709",
	                "name.minikube.sigs.k8s.io": "functional-652709",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "52e527b5bd789a02eb7efb651200033ed4929e5fc7545e9df042d3f777cc9782",
	            "SandboxKey": "/var/run/docker/netns/52e527b5bd78",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-652709": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:23:08:9e:cb:13",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "344f2b940117dadb28d1ef1328f911c0446307288fdfafebfe59f38e473f79cb",
	                    "EndpointID": "8954f96e5987202be5715e7023384fe862744778b2520bccba28c57814f0980f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-652709",
	                        "0f6101071ca2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
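
The inspect dump above is what the harness and minikube itself consume through Go templates rather than reading whole. For example, the host port mapped to the node's SSH port (22/tcp -> 127.0.0.1:33125 in the Ports block) can be pulled out with the same kind of invocation the provisioner issues later in this log:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-652709
	# prints 33125 for this run

The NetworkSettings.Networks map yields the node IP the same way (192.168.49.2 on the functional-652709 network).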
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-652709 -n functional-652709
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-652709 -n functional-652709: exit status 2 (333.640716ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
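
The --format={{.Host}} argument is a Go template over minikube's status struct, which is why a bare "Running" comes back on stdout while the degraded state surfaces only through exit status 2. For the full struct in machine-readable form, a sketch (assuming the standard --output flag of minikube v1.37.0):

	out/minikube-linux-arm64 status -p functional-652709 --output json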
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-319494 image ls --format short --alsologtostderr                                                                                             │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ image   │ functional-319494 image ls --format yaml --alsologtostderr                                                                                              │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ ssh     │ functional-319494 ssh pgrep buildkitd                                                                                                                   │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │                     │
	│ image   │ functional-319494 image ls --format json --alsologtostderr                                                                                              │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ image   │ functional-319494 image build -t localhost/my-image:functional-319494 testdata/build --alsologtostderr                                                  │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ image   │ functional-319494 image ls --format table --alsologtostderr                                                                                             │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ image   │ functional-319494 image ls                                                                                                                              │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ delete  │ -p functional-319494                                                                                                                                    │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ start   │ -p functional-652709 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │                     │
	│ start   │ -p functional-652709 --alsologtostderr -v=8                                                                                                             │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:31 UTC │                     │
	│ cache   │ functional-652709 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ cache   │ functional-652709 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ cache   │ functional-652709 cache add registry.k8s.io/pause:latest                                                                                                │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ cache   │ functional-652709 cache add minikube-local-cache-test:functional-652709                                                                                 │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ cache   │ functional-652709 cache delete minikube-local-cache-test:functional-652709                                                                              │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ ssh     │ functional-652709 ssh sudo crictl images                                                                                                                │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ ssh     │ functional-652709 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ ssh     │ functional-652709 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │                     │
	│ cache   │ functional-652709 cache reload                                                                                                                          │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ ssh     │ functional-652709 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ kubectl │ functional-652709 kubectl -- --context functional-652709 get pods                                                                                       │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
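
Each Audit row above is one recorded minikube invocation for this build; rows with an empty END TIME (e.g. both start runs and the final kubectl get pods) never recorded a completion. The table is produced as part of the minikube logs run at helpers_test.go:256; to pull just this section from a live profile, a sketch (assuming the --audit flag of minikube v1.37.0):

	out/minikube-linux-arm64 -p functional-652709 logs --audit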
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:31:02
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:31:02.672113  353396 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:31:02.672249  353396 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:31:02.672258  353396 out.go:374] Setting ErrFile to fd 2...
	I1213 10:31:02.672263  353396 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:31:02.672511  353396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 10:31:02.672909  353396 out.go:368] Setting JSON to false
	I1213 10:31:02.673776  353396 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":11616,"bootTime":1765610247,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 10:31:02.673896  353396 start.go:143] virtualization:  
	I1213 10:31:02.677410  353396 out.go:179] * [functional-652709] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:31:02.681384  353396 notify.go:221] Checking for updates...
	I1213 10:31:02.681459  353396 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 10:31:02.684444  353396 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:31:02.687336  353396 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:31:02.690317  353396 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 10:31:02.693212  353396 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:31:02.696019  353396 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:31:02.699466  353396 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:31:02.699577  353396 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:31:02.725188  353396 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:31:02.725318  353396 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:31:02.796082  353396 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:31:02.785556605 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:31:02.796187  353396 docker.go:319] overlay module found
	I1213 10:31:02.799378  353396 out.go:179] * Using the docker driver based on existing profile
	I1213 10:31:02.802341  353396 start.go:309] selected driver: docker
	I1213 10:31:02.802370  353396 start.go:927] validating driver "docker" against &{Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:31:02.802524  353396 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:31:02.802652  353396 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:31:02.859333  353396 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:31:02.849982894 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:31:02.859762  353396 cni.go:84] Creating CNI manager for ""
	I1213 10:31:02.859824  353396 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:31:02.859884  353396 start.go:353] cluster config:
	{Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:31:02.863117  353396 out.go:179] * Starting "functional-652709" primary control-plane node in "functional-652709" cluster
	I1213 10:31:02.865981  353396 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 10:31:02.868957  353396 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:31:02.871941  353396 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 10:31:02.871997  353396 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 10:31:02.872008  353396 cache.go:65] Caching tarball of preloaded images
	I1213 10:31:02.872055  353396 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:31:02.872104  353396 preload.go:238] Found /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 10:31:02.872129  353396 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 10:31:02.872236  353396 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/config.json ...
	I1213 10:31:02.890218  353396 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:31:02.890243  353396 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:31:02.890259  353396 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:31:02.890291  353396 start.go:360] acquireMachinesLock for functional-652709: {Name:mk6e8c40fbbb5af0bb2468340fd710875030300d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:31:02.890351  353396 start.go:364] duration metric: took 34.691µs to acquireMachinesLock for "functional-652709"
	I1213 10:31:02.890374  353396 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:31:02.890380  353396 fix.go:54] fixHost starting: 
	I1213 10:31:02.890658  353396 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
	I1213 10:31:02.911217  353396 fix.go:112] recreateIfNeeded on functional-652709: state=Running err=<nil>
	W1213 10:31:02.911248  353396 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 10:31:02.914505  353396 out.go:252] * Updating the running docker "functional-652709" container ...
	I1213 10:31:02.914550  353396 machine.go:94] provisionDockerMachine start ...
	I1213 10:31:02.914653  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:02.937238  353396 main.go:143] libmachine: Using SSH client type: native
	I1213 10:31:02.937582  353396 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1213 10:31:02.937592  353396 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:31:03.091334  353396 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-652709
	
	I1213 10:31:03.091359  353396 ubuntu.go:182] provisioning hostname "functional-652709"
	I1213 10:31:03.091424  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:03.110422  353396 main.go:143] libmachine: Using SSH client type: native
	I1213 10:31:03.110837  353396 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1213 10:31:03.110855  353396 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-652709 && echo "functional-652709" | sudo tee /etc/hostname
	I1213 10:31:03.277113  353396 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-652709
	
	I1213 10:31:03.277196  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:03.294664  353396 main.go:143] libmachine: Using SSH client type: native
	I1213 10:31:03.295057  353396 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1213 10:31:03.295079  353396 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-652709' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-652709/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-652709' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:31:03.447182  353396 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:31:03.447207  353396 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-307042/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-307042/.minikube}
	I1213 10:31:03.447240  353396 ubuntu.go:190] setting up certificates
	I1213 10:31:03.447256  353396 provision.go:84] configureAuth start
	I1213 10:31:03.447330  353396 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-652709
	I1213 10:31:03.465044  353396 provision.go:143] copyHostCerts
	I1213 10:31:03.465100  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem
	I1213 10:31:03.465141  353396 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem, removing ...
	I1213 10:31:03.465148  353396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem
	I1213 10:31:03.465220  353396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem (1082 bytes)
	I1213 10:31:03.465329  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem
	I1213 10:31:03.465349  353396 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem, removing ...
	I1213 10:31:03.465353  353396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem
	I1213 10:31:03.465383  353396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem (1123 bytes)
	I1213 10:31:03.465436  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem
	I1213 10:31:03.465453  353396 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem, removing ...
	I1213 10:31:03.465457  353396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem
	I1213 10:31:03.465486  353396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem (1675 bytes)
	I1213 10:31:03.465541  353396 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem org=jenkins.functional-652709 san=[127.0.0.1 192.168.49.2 functional-652709 localhost minikube]
	I1213 10:31:03.927648  353396 provision.go:177] copyRemoteCerts
	I1213 10:31:03.927724  353396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:31:03.927763  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:03.947692  353396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:31:04.064623  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 10:31:04.064688  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:31:04.082355  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 10:31:04.082418  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:31:04.100866  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 10:31:04.100930  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:31:04.121259  353396 provision.go:87] duration metric: took 673.978127ms to configureAuth
	I1213 10:31:04.121312  353396 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:31:04.121495  353396 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:31:04.121509  353396 machine.go:97] duration metric: took 1.206951102s to provisionDockerMachine
	I1213 10:31:04.121518  353396 start.go:293] postStartSetup for "functional-652709" (driver="docker")
	I1213 10:31:04.121529  353396 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:31:04.121586  353396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:31:04.121633  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:04.139400  353396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:31:04.246752  353396 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:31:04.250273  353396 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1213 10:31:04.250297  353396 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1213 10:31:04.250302  353396 command_runner.go:130] > VERSION_ID="12"
	I1213 10:31:04.250307  353396 command_runner.go:130] > VERSION="12 (bookworm)"
	I1213 10:31:04.250312  353396 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1213 10:31:04.250316  353396 command_runner.go:130] > ID=debian
	I1213 10:31:04.250320  353396 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1213 10:31:04.250325  353396 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1213 10:31:04.250331  353396 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1213 10:31:04.250368  353396 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:31:04.250390  353396 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:31:04.250401  353396 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/addons for local assets ...
	I1213 10:31:04.250463  353396 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/files for local assets ...
	I1213 10:31:04.250545  353396 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> 3089152.pem in /etc/ssl/certs
	I1213 10:31:04.250556  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> /etc/ssl/certs/3089152.pem
	I1213 10:31:04.250633  353396 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/test/nested/copy/308915/hosts -> hosts in /etc/test/nested/copy/308915
	I1213 10:31:04.250715  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/test/nested/copy/308915/hosts -> /etc/test/nested/copy/308915/hosts
	I1213 10:31:04.250766  353396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/308915
	I1213 10:31:04.258199  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 10:31:04.275892  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/test/nested/copy/308915/hosts --> /etc/test/nested/copy/308915/hosts (40 bytes)
	I1213 10:31:04.293256  353396 start.go:296] duration metric: took 171.721845ms for postStartSetup
	I1213 10:31:04.293373  353396 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:31:04.293418  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:04.310428  353396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:31:04.412061  353396 command_runner.go:130] > 11%
	I1213 10:31:04.412134  353396 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:31:04.417606  353396 command_runner.go:130] > 174G
	I1213 10:31:04.418241  353396 fix.go:56] duration metric: took 1.527856492s for fixHost
	I1213 10:31:04.418260  353396 start.go:83] releasing machines lock for "functional-652709", held for 1.527895524s
	I1213 10:31:04.418328  353396 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-652709
	I1213 10:31:04.443217  353396 ssh_runner.go:195] Run: cat /version.json
	I1213 10:31:04.443268  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:04.443564  353396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:31:04.443617  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:04.481371  353396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:31:04.481516  353396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:31:04.669844  353396 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1213 10:31:04.669910  353396 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1213 10:31:04.670045  353396 ssh_runner.go:195] Run: systemctl --version
	I1213 10:31:04.676239  353396 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1213 10:31:04.676276  353396 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1213 10:31:04.676350  353396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 10:31:04.680689  353396 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 10:31:04.680854  353396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:31:04.680918  353396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:31:04.688793  353396 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 10:31:04.688818  353396 start.go:496] detecting cgroup driver to use...
	I1213 10:31:04.688851  353396 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:31:04.688909  353396 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 10:31:04.704425  353396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 10:31:04.717662  353396 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:31:04.717728  353396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:31:04.733551  353396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:31:04.746955  353396 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:31:04.865557  353396 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:31:04.977869  353396 docker.go:234] disabling docker service ...
	I1213 10:31:04.977950  353396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:31:04.992461  353396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:31:05.013428  353396 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:31:05.135601  353396 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:31:05.282715  353396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:31:05.296047  353396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:31:05.308957  353396 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1213 10:31:05.310188  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 10:31:05.319385  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 10:31:05.328561  353396 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 10:31:05.328627  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 10:31:05.337573  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:31:05.346847  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 10:31:05.355976  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:31:05.364985  353396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:31:05.373424  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 10:31:05.382892  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 10:31:05.391826  353396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 10:31:05.401136  353396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:31:05.407987  353396 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 10:31:05.408928  353396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:31:05.416444  353396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:31:05.526748  353396 ssh_runner.go:195] Run: sudo systemctl restart containerd
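
Condensed, the sed pass above rewrites /etc/containerd/config.toml in place before this restart; reconstructed from the sed expressions themselves (not read back from the file), the keys it leaves behind are:

	sandbox_image = "registry.k8s.io/pause:3.10.1"
	restrict_oom_score_adj = false
	SystemdCgroup = false    # cgroupfs, per the host driver detection at 10:31:04.688851
	conf_dir = "/etc/cni/net.d"

with any v1 runc runtime names rewritten to "io.containerd.runc.v2" and enable_unprivileged_ports = true re-inserted under [plugins."io.containerd.grpc.v1.cri"].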
	I1213 10:31:05.655433  353396 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 10:31:05.655515  353396 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 10:31:05.659353  353396 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1213 10:31:05.659378  353396 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 10:31:05.659389  353396 command_runner.go:130] > Device: 0,72	Inode: 1622        Links: 1
	I1213 10:31:05.659396  353396 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 10:31:05.659402  353396 command_runner.go:130] > Access: 2025-12-13 10:31:05.610211940 +0000
	I1213 10:31:05.659407  353396 command_runner.go:130] > Modify: 2025-12-13 10:31:05.610211940 +0000
	I1213 10:31:05.659412  353396 command_runner.go:130] > Change: 2025-12-13 10:31:05.610211940 +0000
	I1213 10:31:05.659416  353396 command_runner.go:130] >  Birth: -
	I1213 10:31:05.660005  353396 start.go:564] Will wait 60s for crictl version
	I1213 10:31:05.660063  353396 ssh_runner.go:195] Run: which crictl
	I1213 10:31:05.663492  353396 command_runner.go:130] > /usr/local/bin/crictl
	I1213 10:31:05.663579  353396 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:31:05.685881  353396 command_runner.go:130] > Version:  0.1.0
	I1213 10:31:05.685946  353396 command_runner.go:130] > RuntimeName:  containerd
	I1213 10:31:05.686097  353396 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1213 10:31:05.686253  353396 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 10:31:05.688463  353396 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 10:31:05.688528  353396 ssh_runner.go:195] Run: containerd --version
	I1213 10:31:05.706883  353396 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1213 10:31:05.709639  353396 ssh_runner.go:195] Run: containerd --version
	I1213 10:31:05.727187  353396 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
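
The same probes can be replayed by hand to confirm the containerd v2.2.0 / CRI v1 wiring on the node; a sketch, assuming the profile container is still up:

	out/minikube-linux-arm64 -p functional-652709 ssh -- sudo crictl version
	out/minikube-linux-arm64 -p functional-652709 ssh -- containerd --version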
	I1213 10:31:05.735610  353396 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 10:31:05.738579  353396 cli_runner.go:164] Run: docker network inspect functional-652709 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:31:05.753316  353396 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 10:31:05.757039  353396 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1213 10:31:05.757213  353396 kubeadm.go:884] updating cluster {Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:31:05.757336  353396 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 10:31:05.757417  353396 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:31:05.778952  353396 command_runner.go:130] > {
	I1213 10:31:05.778976  353396 command_runner.go:130] >   "images":  [
	I1213 10:31:05.778980  353396 command_runner.go:130] >     {
	I1213 10:31:05.778990  353396 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 10:31:05.778995  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779001  353396 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 10:31:05.779005  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779009  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779018  353396 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1213 10:31:05.779024  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779028  353396 command_runner.go:130] >       "size":  "40636774",
	I1213 10:31:05.779032  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779041  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779045  353396 command_runner.go:130] >     },
	I1213 10:31:05.779053  353396 command_runner.go:130] >     {
	I1213 10:31:05.779066  353396 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 10:31:05.779074  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779080  353396 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 10:31:05.779087  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779091  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779102  353396 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 10:31:05.779106  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779110  353396 command_runner.go:130] >       "size":  "8034419",
	I1213 10:31:05.779116  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779120  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779128  353396 command_runner.go:130] >     },
	I1213 10:31:05.779131  353396 command_runner.go:130] >     {
	I1213 10:31:05.779138  353396 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 10:31:05.779145  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779150  353396 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 10:31:05.779157  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779163  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779175  353396 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1213 10:31:05.779181  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779185  353396 command_runner.go:130] >       "size":  "21168808",
	I1213 10:31:05.779190  353396 command_runner.go:130] >       "username":  "nonroot",
	I1213 10:31:05.779195  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779199  353396 command_runner.go:130] >     },
	I1213 10:31:05.779204  353396 command_runner.go:130] >     {
	I1213 10:31:05.779211  353396 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 10:31:05.779218  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779224  353396 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 10:31:05.779231  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779235  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779246  353396 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1213 10:31:05.779252  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779257  353396 command_runner.go:130] >       "size":  "21136588",
	I1213 10:31:05.779267  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.779275  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.779279  353396 command_runner.go:130] >       },
	I1213 10:31:05.779283  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779290  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779299  353396 command_runner.go:130] >     },
	I1213 10:31:05.779303  353396 command_runner.go:130] >     {
	I1213 10:31:05.779314  353396 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 10:31:05.779321  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779327  353396 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 10:31:05.779334  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779338  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779350  353396 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1213 10:31:05.779357  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779361  353396 command_runner.go:130] >       "size":  "24678359",
	I1213 10:31:05.779365  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.779375  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.779384  353396 command_runner.go:130] >       },
	I1213 10:31:05.779388  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779396  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779400  353396 command_runner.go:130] >     },
	I1213 10:31:05.779407  353396 command_runner.go:130] >     {
	I1213 10:31:05.779414  353396 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 10:31:05.779421  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779428  353396 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 10:31:05.779435  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779439  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779450  353396 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1213 10:31:05.779454  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779461  353396 command_runner.go:130] >       "size":  "20661043",
	I1213 10:31:05.779465  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.779473  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.779477  353396 command_runner.go:130] >       },
	I1213 10:31:05.779489  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779497  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779501  353396 command_runner.go:130] >     },
	I1213 10:31:05.779507  353396 command_runner.go:130] >     {
	I1213 10:31:05.779515  353396 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 10:31:05.779522  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779527  353396 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 10:31:05.779534  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779538  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779546  353396 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 10:31:05.779553  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779557  353396 command_runner.go:130] >       "size":  "22429671",
	I1213 10:31:05.779561  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779567  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779571  353396 command_runner.go:130] >     },
	I1213 10:31:05.779578  353396 command_runner.go:130] >     {
	I1213 10:31:05.779586  353396 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 10:31:05.779593  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779600  353396 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 10:31:05.779606  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779610  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779622  353396 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1213 10:31:05.779628  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779633  353396 command_runner.go:130] >       "size":  "15391364",
	I1213 10:31:05.779641  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.779645  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.779648  353396 command_runner.go:130] >       },
	I1213 10:31:05.779654  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779658  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.779666  353396 command_runner.go:130] >     },
	I1213 10:31:05.779669  353396 command_runner.go:130] >     {
	I1213 10:31:05.779681  353396 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 10:31:05.779688  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.779698  353396 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 10:31:05.779704  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779709  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.779720  353396 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1213 10:31:05.779726  353396 command_runner.go:130] >       ],
	I1213 10:31:05.779730  353396 command_runner.go:130] >       "size":  "267939",
	I1213 10:31:05.779735  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.779741  353396 command_runner.go:130] >         "value":  "65535"
	I1213 10:31:05.779744  353396 command_runner.go:130] >       },
	I1213 10:31:05.779753  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.779758  353396 command_runner.go:130] >       "pinned":  true
	I1213 10:31:05.779764  353396 command_runner.go:130] >     }
	I1213 10:31:05.779767  353396 command_runner.go:130] >   ]
	I1213 10:31:05.779770  353396 command_runner.go:130] > }
	I1213 10:31:05.781791  353396 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 10:31:05.781813  353396 containerd.go:534] Images already preloaded, skipping extraction
	I1213 10:31:05.781881  353396 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:31:05.805396  353396 command_runner.go:130] > {
	I1213 10:31:05.805420  353396 command_runner.go:130] >   "images":  [
	I1213 10:31:05.805426  353396 command_runner.go:130] >     {
	I1213 10:31:05.805436  353396 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 10:31:05.805441  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.805447  353396 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 10:31:05.805452  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805456  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.805465  353396 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1213 10:31:05.805471  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805477  353396 command_runner.go:130] >       "size":  "40636774",
	I1213 10:31:05.805485  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.805490  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.805501  353396 command_runner.go:130] >     },
	I1213 10:31:05.805504  353396 command_runner.go:130] >     {
	I1213 10:31:05.805512  353396 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 10:31:05.805517  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.805523  353396 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 10:31:05.805528  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805543  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.805556  353396 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 10:31:05.805566  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805576  353396 command_runner.go:130] >       "size":  "8034419",
	I1213 10:31:05.805580  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.805590  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.805594  353396 command_runner.go:130] >     },
	I1213 10:31:05.805601  353396 command_runner.go:130] >     {
	I1213 10:31:05.805608  353396 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 10:31:05.805619  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.805625  353396 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 10:31:05.805630  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805655  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.805669  353396 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1213 10:31:05.805675  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805680  353396 command_runner.go:130] >       "size":  "21168808",
	I1213 10:31:05.805687  353396 command_runner.go:130] >       "username":  "nonroot",
	I1213 10:31:05.805693  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.805697  353396 command_runner.go:130] >     },
	I1213 10:31:05.805701  353396 command_runner.go:130] >     {
	I1213 10:31:05.805707  353396 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 10:31:05.805715  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.805720  353396 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 10:31:05.805727  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805732  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.805743  353396 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1213 10:31:05.805750  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805754  353396 command_runner.go:130] >       "size":  "21136588",
	I1213 10:31:05.805762  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.805772  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.805778  353396 command_runner.go:130] >       },
	I1213 10:31:05.805783  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.805787  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.805795  353396 command_runner.go:130] >     },
	I1213 10:31:05.805803  353396 command_runner.go:130] >     {
	I1213 10:31:05.805810  353396 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 10:31:05.805818  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.805824  353396 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 10:31:05.805846  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805855  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.805863  353396 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1213 10:31:05.805867  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805873  353396 command_runner.go:130] >       "size":  "24678359",
	I1213 10:31:05.805877  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.805891  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.805894  353396 command_runner.go:130] >       },
	I1213 10:31:05.805899  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.805906  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.805910  353396 command_runner.go:130] >     },
	I1213 10:31:05.805917  353396 command_runner.go:130] >     {
	I1213 10:31:05.805924  353396 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 10:31:05.805931  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.805938  353396 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 10:31:05.805941  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805946  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.805956  353396 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1213 10:31:05.805963  353396 command_runner.go:130] >       ],
	I1213 10:31:05.805967  353396 command_runner.go:130] >       "size":  "20661043",
	I1213 10:31:05.805972  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.805979  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.805983  353396 command_runner.go:130] >       },
	I1213 10:31:05.805991  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.805995  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.806002  353396 command_runner.go:130] >     },
	I1213 10:31:05.806005  353396 command_runner.go:130] >     {
	I1213 10:31:05.806012  353396 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 10:31:05.806021  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.806032  353396 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 10:31:05.806036  353396 command_runner.go:130] >       ],
	I1213 10:31:05.806040  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.806048  353396 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 10:31:05.806055  353396 command_runner.go:130] >       ],
	I1213 10:31:05.806059  353396 command_runner.go:130] >       "size":  "22429671",
	I1213 10:31:05.806068  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.806072  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.806078  353396 command_runner.go:130] >     },
	I1213 10:31:05.806082  353396 command_runner.go:130] >     {
	I1213 10:31:05.806089  353396 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 10:31:05.806096  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.806101  353396 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 10:31:05.806109  353396 command_runner.go:130] >       ],
	I1213 10:31:05.806113  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.806124  353396 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1213 10:31:05.806131  353396 command_runner.go:130] >       ],
	I1213 10:31:05.806135  353396 command_runner.go:130] >       "size":  "15391364",
	I1213 10:31:05.806139  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.806147  353396 command_runner.go:130] >         "value":  "0"
	I1213 10:31:05.806151  353396 command_runner.go:130] >       },
	I1213 10:31:05.806159  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.806164  353396 command_runner.go:130] >       "pinned":  false
	I1213 10:31:05.806171  353396 command_runner.go:130] >     },
	I1213 10:31:05.806174  353396 command_runner.go:130] >     {
	I1213 10:31:05.806180  353396 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 10:31:05.806186  353396 command_runner.go:130] >       "repoTags":  [
	I1213 10:31:05.806191  353396 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 10:31:05.806197  353396 command_runner.go:130] >       ],
	I1213 10:31:05.806202  353396 command_runner.go:130] >       "repoDigests":  [
	I1213 10:31:05.806213  353396 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1213 10:31:05.806217  353396 command_runner.go:130] >       ],
	I1213 10:31:05.806230  353396 command_runner.go:130] >       "size":  "267939",
	I1213 10:31:05.806238  353396 command_runner.go:130] >       "uid":  {
	I1213 10:31:05.806242  353396 command_runner.go:130] >         "value":  "65535"
	I1213 10:31:05.806251  353396 command_runner.go:130] >       },
	I1213 10:31:05.806255  353396 command_runner.go:130] >       "username":  "",
	I1213 10:31:05.806259  353396 command_runner.go:130] >       "pinned":  true
	I1213 10:31:05.806262  353396 command_runner.go:130] >     }
	I1213 10:31:05.806267  353396 command_runner.go:130] >   ]
	I1213 10:31:05.806271  353396 command_runner.go:130] > }
	I1213 10:31:05.808725  353396 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 10:31:05.808749  353396 cache_images.go:86] Images are preloaded, skipping loading
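
The preload check above boils down to parsing the `crictl images --output json` payload and comparing repo tags against the expected set. A minimal Go sketch of that idea (the struct mirrors the JSON keys in the dump above; the `expected` list is illustrative, not minikube's actual required-image set):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // imageList mirrors the shape printed by `crictl images --output json`.
    type imageList struct {
    	Images []struct {
    		ID       string   `json:"id"`
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	// Hypothetical subset of the tags expected to be preloaded.
    	expected := []string{
    		"registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
    		"registry.k8s.io/etcd:3.6.5-0",
    	}

    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		panic(err)
    	}

    	// Index every tag we actually have, then report on the expected ones.
    	have := map[string]bool{}
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			have[tag] = true
    		}
    	}
    	for _, tag := range expected {
    		fmt.Printf("%s preloaded=%v\n", tag, have[tag])
    	}
    }
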
	I1213 10:31:05.808757  353396 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1213 10:31:05.808887  353396 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-652709 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
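
The kubelet drop-in logged above is rendered from per-node values (binary path, hostname override, node IP). A rough sketch of that kind of templating with Go's text/template; this is not minikube's actual template, just the shape of the ExecStart line:

    package main

    import (
    	"os"
    	"text/template"
    )

    // A simplified stand-in for the kubelet systemd drop-in shown above.
    const unit = `[Service]
    ExecStart=
    ExecStart={{.BinDir}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}} --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(unit))
    	// Values taken from the log above.
    	_ = t.Execute(os.Stdout, map[string]string{
    		"BinDir": "/var/lib/minikube/binaries/v1.35.0-beta.0",
    		"Node":   "functional-652709",
    		"IP":     "192.168.49.2",
    	})
    }
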
	I1213 10:31:05.808967  353396 ssh_runner.go:195] Run: sudo crictl info
	I1213 10:31:05.831572  353396 command_runner.go:130] > {
	I1213 10:31:05.831594  353396 command_runner.go:130] >   "cniconfig": {
	I1213 10:31:05.831601  353396 command_runner.go:130] >     "Networks": [
	I1213 10:31:05.831604  353396 command_runner.go:130] >       {
	I1213 10:31:05.831609  353396 command_runner.go:130] >         "Config": {
	I1213 10:31:05.831614  353396 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1213 10:31:05.831619  353396 command_runner.go:130] >           "Name": "cni-loopback",
	I1213 10:31:05.831623  353396 command_runner.go:130] >           "Plugins": [
	I1213 10:31:05.831627  353396 command_runner.go:130] >             {
	I1213 10:31:05.831631  353396 command_runner.go:130] >               "Network": {
	I1213 10:31:05.831635  353396 command_runner.go:130] >                 "ipam": {},
	I1213 10:31:05.831641  353396 command_runner.go:130] >                 "type": "loopback"
	I1213 10:31:05.831650  353396 command_runner.go:130] >               },
	I1213 10:31:05.831662  353396 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1213 10:31:05.831670  353396 command_runner.go:130] >             }
	I1213 10:31:05.831674  353396 command_runner.go:130] >           ],
	I1213 10:31:05.831684  353396 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1213 10:31:05.831688  353396 command_runner.go:130] >         },
	I1213 10:31:05.831696  353396 command_runner.go:130] >         "IFName": "lo"
	I1213 10:31:05.831703  353396 command_runner.go:130] >       }
	I1213 10:31:05.831707  353396 command_runner.go:130] >     ],
	I1213 10:31:05.831712  353396 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1213 10:31:05.831720  353396 command_runner.go:130] >     "PluginDirs": [
	I1213 10:31:05.831724  353396 command_runner.go:130] >       "/opt/cni/bin"
	I1213 10:31:05.831731  353396 command_runner.go:130] >     ],
	I1213 10:31:05.831736  353396 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1213 10:31:05.831743  353396 command_runner.go:130] >     "Prefix": "eth"
	I1213 10:31:05.831747  353396 command_runner.go:130] >   },
	I1213 10:31:05.831754  353396 command_runner.go:130] >   "config": {
	I1213 10:31:05.831762  353396 command_runner.go:130] >     "cdiSpecDirs": [
	I1213 10:31:05.831765  353396 command_runner.go:130] >       "/etc/cdi",
	I1213 10:31:05.831781  353396 command_runner.go:130] >       "/var/run/cdi"
	I1213 10:31:05.831789  353396 command_runner.go:130] >     ],
	I1213 10:31:05.831793  353396 command_runner.go:130] >     "cni": {
	I1213 10:31:05.831797  353396 command_runner.go:130] >       "binDir": "",
	I1213 10:31:05.831801  353396 command_runner.go:130] >       "binDirs": [
	I1213 10:31:05.831810  353396 command_runner.go:130] >         "/opt/cni/bin"
	I1213 10:31:05.831814  353396 command_runner.go:130] >       ],
	I1213 10:31:05.831818  353396 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1213 10:31:05.831821  353396 command_runner.go:130] >       "confTemplate": "",
	I1213 10:31:05.831825  353396 command_runner.go:130] >       "ipPref": "",
	I1213 10:31:05.831829  353396 command_runner.go:130] >       "maxConfNum": 1,
	I1213 10:31:05.831832  353396 command_runner.go:130] >       "setupSerially": false,
	I1213 10:31:05.831837  353396 command_runner.go:130] >       "useInternalLoopback": false
	I1213 10:31:05.831840  353396 command_runner.go:130] >     },
	I1213 10:31:05.831851  353396 command_runner.go:130] >     "containerd": {
	I1213 10:31:05.831859  353396 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1213 10:31:05.831864  353396 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1213 10:31:05.831869  353396 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1213 10:31:05.831872  353396 command_runner.go:130] >       "runtimes": {
	I1213 10:31:05.831875  353396 command_runner.go:130] >         "runc": {
	I1213 10:31:05.831879  353396 command_runner.go:130] >           "ContainerAnnotations": null,
	I1213 10:31:05.831884  353396 command_runner.go:130] >           "PodAnnotations": null,
	I1213 10:31:05.831891  353396 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1213 10:31:05.831895  353396 command_runner.go:130] >           "cgroupWritable": false,
	I1213 10:31:05.831899  353396 command_runner.go:130] >           "cniConfDir": "",
	I1213 10:31:05.831905  353396 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1213 10:31:05.831910  353396 command_runner.go:130] >           "io_type": "",
	I1213 10:31:05.831919  353396 command_runner.go:130] >           "options": {
	I1213 10:31:05.831924  353396 command_runner.go:130] >             "BinaryName": "",
	I1213 10:31:05.831929  353396 command_runner.go:130] >             "CriuImagePath": "",
	I1213 10:31:05.831936  353396 command_runner.go:130] >             "CriuWorkPath": "",
	I1213 10:31:05.831940  353396 command_runner.go:130] >             "IoGid": 0,
	I1213 10:31:05.831948  353396 command_runner.go:130] >             "IoUid": 0,
	I1213 10:31:05.831953  353396 command_runner.go:130] >             "NoNewKeyring": false,
	I1213 10:31:05.831961  353396 command_runner.go:130] >             "Root": "",
	I1213 10:31:05.831965  353396 command_runner.go:130] >             "ShimCgroup": "",
	I1213 10:31:05.831970  353396 command_runner.go:130] >             "SystemdCgroup": false
	I1213 10:31:05.831992  353396 command_runner.go:130] >           },
	I1213 10:31:05.831998  353396 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1213 10:31:05.832004  353396 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1213 10:31:05.832011  353396 command_runner.go:130] >           "runtimePath": "",
	I1213 10:31:05.832017  353396 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1213 10:31:05.832025  353396 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1213 10:31:05.832030  353396 command_runner.go:130] >           "snapshotter": ""
	I1213 10:31:05.832037  353396 command_runner.go:130] >         }
	I1213 10:31:05.832040  353396 command_runner.go:130] >       }
	I1213 10:31:05.832043  353396 command_runner.go:130] >     },
	I1213 10:31:05.832055  353396 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1213 10:31:05.832065  353396 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1213 10:31:05.832073  353396 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1213 10:31:05.832081  353396 command_runner.go:130] >     "disableApparmor": false,
	I1213 10:31:05.832086  353396 command_runner.go:130] >     "disableHugetlbController": true,
	I1213 10:31:05.832093  353396 command_runner.go:130] >     "disableProcMount": false,
	I1213 10:31:05.832098  353396 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1213 10:31:05.832106  353396 command_runner.go:130] >     "enableCDI": true,
	I1213 10:31:05.832110  353396 command_runner.go:130] >     "enableSelinux": false,
	I1213 10:31:05.832118  353396 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1213 10:31:05.832123  353396 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1213 10:31:05.832131  353396 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1213 10:31:05.832135  353396 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1213 10:31:05.832140  353396 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1213 10:31:05.832144  353396 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1213 10:31:05.832151  353396 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1213 10:31:05.832157  353396 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1213 10:31:05.832165  353396 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1213 10:31:05.832171  353396 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1213 10:31:05.832180  353396 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1213 10:31:05.832185  353396 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1213 10:31:05.832192  353396 command_runner.go:130] >   },
	I1213 10:31:05.832195  353396 command_runner.go:130] >   "features": {
	I1213 10:31:05.832204  353396 command_runner.go:130] >     "supplemental_groups_policy": true
	I1213 10:31:05.832208  353396 command_runner.go:130] >   },
	I1213 10:31:05.832212  353396 command_runner.go:130] >   "golang": "go1.24.9",
	I1213 10:31:05.832222  353396 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1213 10:31:05.832235  353396 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1213 10:31:05.832240  353396 command_runner.go:130] >   "runtimeHandlers": [
	I1213 10:31:05.832245  353396 command_runner.go:130] >     {
	I1213 10:31:05.832248  353396 command_runner.go:130] >       "features": {
	I1213 10:31:05.832257  353396 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1213 10:31:05.832262  353396 command_runner.go:130] >         "user_namespaces": true
	I1213 10:31:05.832268  353396 command_runner.go:130] >       }
	I1213 10:31:05.832276  353396 command_runner.go:130] >     },
	I1213 10:31:05.832283  353396 command_runner.go:130] >     {
	I1213 10:31:05.832287  353396 command_runner.go:130] >       "features": {
	I1213 10:31:05.832295  353396 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1213 10:31:05.832299  353396 command_runner.go:130] >         "user_namespaces": true
	I1213 10:31:05.832302  353396 command_runner.go:130] >       },
	I1213 10:31:05.832307  353396 command_runner.go:130] >       "name": "runc"
	I1213 10:31:05.832310  353396 command_runner.go:130] >     }
	I1213 10:31:05.832313  353396 command_runner.go:130] >   ],
	I1213 10:31:05.832316  353396 command_runner.go:130] >   "status": {
	I1213 10:31:05.832320  353396 command_runner.go:130] >     "conditions": [
	I1213 10:31:05.832325  353396 command_runner.go:130] >       {
	I1213 10:31:05.832330  353396 command_runner.go:130] >         "message": "",
	I1213 10:31:05.832337  353396 command_runner.go:130] >         "reason": "",
	I1213 10:31:05.832344  353396 command_runner.go:130] >         "status": true,
	I1213 10:31:05.832354  353396 command_runner.go:130] >         "type": "RuntimeReady"
	I1213 10:31:05.832362  353396 command_runner.go:130] >       },
	I1213 10:31:05.832365  353396 command_runner.go:130] >       {
	I1213 10:31:05.832375  353396 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1213 10:31:05.832380  353396 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1213 10:31:05.832383  353396 command_runner.go:130] >         "status": false,
	I1213 10:31:05.832388  353396 command_runner.go:130] >         "type": "NetworkReady"
	I1213 10:31:05.832396  353396 command_runner.go:130] >       },
	I1213 10:31:05.832399  353396 command_runner.go:130] >       {
	I1213 10:31:05.832422  353396 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1213 10:31:05.832434  353396 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1213 10:31:05.832444  353396 command_runner.go:130] >         "status": false,
	I1213 10:31:05.832451  353396 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1213 10:31:05.832454  353396 command_runner.go:130] >       }
	I1213 10:31:05.832457  353396 command_runner.go:130] >     ]
	I1213 10:31:05.832461  353396 command_runner.go:130] >   }
	I1213 10:31:05.832463  353396 command_runner.go:130] > }
	I1213 10:31:05.834983  353396 cni.go:84] Creating CNI manager for ""
	I1213 10:31:05.835008  353396 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
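
The kindnet recommendation follows directly from the `crictl info` status above: RuntimeReady is true, but NetworkReady is false until a CNI config lands in /etc/cni/net.d. A small sketch of reading those conditions (field names taken from the dump above):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // criInfo mirrors the "status.conditions" shape in the `crictl info` output.
    type criInfo struct {
    	Status struct {
    		Conditions []struct {
    			Type    string `json:"type"`
    			Status  bool   `json:"status"`
    			Reason  string `json:"reason"`
    			Message string `json:"message"`
    		} `json:"conditions"`
    	} `json:"status"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "info").Output()
    	if err != nil {
    		panic(err)
    	}
    	var info criInfo
    	if err := json.Unmarshal(out, &info); err != nil {
    		panic(err)
    	}
    	// Report the condition that drives the CNI decision above.
    	for _, c := range info.Status.Conditions {
    		if c.Type == "NetworkReady" && !c.Status {
    			fmt.Printf("CNI not ready: %s (%s)\n", c.Reason, c.Message)
    		}
    	}
    }
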
	I1213 10:31:05.835032  353396 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:31:05.835055  353396 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-652709 NodeName:functional-652709 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:31:05.835177  353396 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-652709"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 10:31:05.835253  353396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:31:05.843333  353396 command_runner.go:130] > kubeadm
	I1213 10:31:05.843355  353396 command_runner.go:130] > kubectl
	I1213 10:31:05.843360  353396 command_runner.go:130] > kubelet
	I1213 10:31:05.843375  353396 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:31:05.843451  353396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:31:05.851169  353396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 10:31:05.865230  353396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:31:05.877883  353396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
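
The kubeadm.yaml.new written above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal sketch of iterating those documents with gopkg.in/yaml.v3, assuming the path from the log:

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	// yaml.v3's decoder yields one document per Decode call,
    	// returning io.EOF after the last "---"-separated document.
    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
    	}
    }
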
	I1213 10:31:05.891827  353396 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:31:05.896023  353396 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1213 10:31:05.896126  353396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:31:06.037110  353396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:31:06.663693  353396 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709 for IP: 192.168.49.2
	I1213 10:31:06.663826  353396 certs.go:195] generating shared ca certs ...
	I1213 10:31:06.663858  353396 certs.go:227] acquiring lock for ca certs: {Name:mkca84a85b1b664dba08ef567e84d493256f825e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:31:06.664061  353396 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key
	I1213 10:31:06.664135  353396 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key
	I1213 10:31:06.664169  353396 certs.go:257] generating profile certs ...
	I1213 10:31:06.664331  353396 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.key
	I1213 10:31:06.664442  353396 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.key.86e7afd1
	I1213 10:31:06.664517  353396 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.key
	I1213 10:31:06.664552  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 10:31:06.664592  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 10:31:06.664634  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 10:31:06.664671  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 10:31:06.664701  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 10:31:06.664745  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 10:31:06.664781  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 10:31:06.664811  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 10:31:06.664893  353396 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem (1338 bytes)
	W1213 10:31:06.664965  353396 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915_empty.pem, impossibly tiny 0 bytes
	I1213 10:31:06.664999  353396 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:31:06.665056  353396 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:31:06.665113  353396 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:31:06.665174  353396 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem (1675 bytes)
	I1213 10:31:06.665258  353396 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 10:31:06.665367  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> /usr/share/ca-certificates/3089152.pem
	I1213 10:31:06.665414  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:31:06.665453  353396 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem -> /usr/share/ca-certificates/308915.pem
	I1213 10:31:06.666083  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:31:06.686373  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:31:06.706393  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:31:06.727893  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:31:06.748376  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:31:06.769115  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 10:31:06.788184  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:31:06.807317  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:31:06.826240  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /usr/share/ca-certificates/3089152.pem (1708 bytes)
	I1213 10:31:06.845063  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:31:06.863130  353396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem --> /usr/share/ca-certificates/308915.pem (1338 bytes)
	I1213 10:31:06.881577  353396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:31:06.894536  353396 ssh_runner.go:195] Run: openssl version
	I1213 10:31:06.900741  353396 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1213 10:31:06.901231  353396 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/308915.pem
	I1213 10:31:06.909107  353396 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/308915.pem /etc/ssl/certs/308915.pem
	I1213 10:31:06.916518  353396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/308915.pem
	I1213 10:31:06.920250  353396 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 10:22 /usr/share/ca-certificates/308915.pem
	I1213 10:31:06.920295  353396 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:22 /usr/share/ca-certificates/308915.pem
	I1213 10:31:06.920347  353396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308915.pem
	I1213 10:31:06.961321  353396 command_runner.go:130] > 51391683
	I1213 10:31:06.961405  353396 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:31:06.969200  353396 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3089152.pem
	I1213 10:31:06.976714  353396 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3089152.pem /etc/ssl/certs/3089152.pem
	I1213 10:31:06.984537  353396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3089152.pem
	I1213 10:31:06.988716  353396 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 10:22 /usr/share/ca-certificates/3089152.pem
	I1213 10:31:06.988763  353396 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:22 /usr/share/ca-certificates/3089152.pem
	I1213 10:31:06.988817  353396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3089152.pem
	I1213 10:31:07.029862  353396 command_runner.go:130] > 3ec20f2e
	I1213 10:31:07.030284  353396 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:31:07.037958  353396 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:31:07.045451  353396 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:31:07.053144  353396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:31:07.056994  353396 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:31:07.057051  353396 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:31:07.057104  353396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:31:07.097856  353396 command_runner.go:130] > b5213941
	I1213 10:31:07.098292  353396 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
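
Each `openssl x509 -hash` / `ln -fs` pair above builds the subject-hash lookup OpenSSL uses to find CAs: a certificate is picked up if /etc/ssl/certs contains a link named <subject-hash>.0 pointing at its PEM. A sketch of the same dance, shelling out to openssl for the hash (the helper name is made up for illustration):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash creates /etc/ssl/certs/<hash>.0 -> pem,
    // mirroring the ln -fs commands in the log above.
    func linkBySubjectHash(pem string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "51391683"
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	os.Remove(link) // -f semantics: ignore error if the link doesn't exist yet
    	return os.Symlink(pem, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
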
	I1213 10:31:07.106039  353396 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:31:07.109917  353396 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:31:07.109945  353396 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 10:31:07.109953  353396 command_runner.go:130] > Device: 259,1	Inode: 3399222     Links: 1
	I1213 10:31:07.109960  353396 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 10:31:07.109966  353396 command_runner.go:130] > Access: 2025-12-13 10:26:59.103845116 +0000
	I1213 10:31:07.109971  353396 command_runner.go:130] > Modify: 2025-12-13 10:22:52.641441584 +0000
	I1213 10:31:07.109977  353396 command_runner.go:130] > Change: 2025-12-13 10:22:52.641441584 +0000
	I1213 10:31:07.109982  353396 command_runner.go:130] >  Birth: 2025-12-13 10:22:52.641441584 +0000
	I1213 10:31:07.110079  353396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:31:07.151277  353396 command_runner.go:130] > Certificate will not expire
	I1213 10:31:07.151699  353396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:31:07.192420  353396 command_runner.go:130] > Certificate will not expire
	I1213 10:31:07.192514  353396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:31:07.233686  353396 command_runner.go:130] > Certificate will not expire
	I1213 10:31:07.233923  353396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:31:07.275302  353396 command_runner.go:130] > Certificate will not expire
	I1213 10:31:07.275760  353396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:31:07.324799  353396 command_runner.go:130] > Certificate will not expire
	I1213 10:31:07.325290  353396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 10:31:07.377047  353396 command_runner.go:130] > Certificate will not expire
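
`openssl x509 -noout -checkend 86400` asks whether a certificate expires within the next 86400 seconds (24 hours). The same check expressed with Go's crypto/x509, as a sketch:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the cert at path expires within d,
    // i.e. the Go equivalent of `openssl x509 -checkend`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }
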
	I1213 10:31:07.377629  353396 kubeadm.go:401] StartCluster: {Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:31:07.377757  353396 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 10:31:07.377843  353396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:31:07.405423  353396 cri.go:89] found id: ""
	I1213 10:31:07.405508  353396 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:31:07.414529  353396 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1213 10:31:07.414595  353396 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1213 10:31:07.414615  353396 command_runner.go:130] > /var/lib/minikube/etcd:
	I1213 10:31:07.415690  353396 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:31:07.415743  353396 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:31:07.415805  353396 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:31:07.423401  353396 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:31:07.423850  353396 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-652709" does not appear in /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:31:07.423998  353396 kubeconfig.go:62] /home/jenkins/minikube-integration/22127-307042/kubeconfig needs updating (will repair): [kubeconfig missing "functional-652709" cluster setting kubeconfig missing "functional-652709" context setting]
	I1213 10:31:07.424313  353396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/kubeconfig: {Name:mk9039d1d506122ff52224469c478e739c5ebabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:31:07.424829  353396 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:31:07.425032  353396 kapi.go:59] client config for functional-652709: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt", KeyFile:"/home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.key", CAFile:"/home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
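
The kubeconfig repair above amounts to loading the file and checking that the profile has both a cluster and a context entry. A sketch with client-go's clientcmd loader (path and profile name taken from the log):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/22127-307042/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	// Both entries must exist, or the kubeconfig "needs updating".
    	const name = "functional-652709"
    	_, hasCluster := cfg.Clusters[name]
    	_, hasContext := cfg.Contexts[name]
    	fmt.Printf("cluster=%v context=%v\n", hasCluster, hasContext)
    }
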
	I1213 10:31:07.425626  353396 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 10:31:07.425778  353396 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 10:31:07.425812  353396 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 10:31:07.425854  353396 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 10:31:07.425888  353396 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 10:31:07.425723  353396 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 10:31:07.426245  353396 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:31:07.437887  353396 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1213 10:31:07.437960  353396 kubeadm.go:602] duration metric: took 22.197398ms to restartPrimaryControlPlane
	I1213 10:31:07.437984  353396 kubeadm.go:403] duration metric: took 60.362619ms to StartCluster
	I1213 10:31:07.438027  353396 settings.go:142] acquiring lock: {Name:mk079e9a25ebbc2c8fbae42d4c6ed096a652c00b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:31:07.438107  353396 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:31:07.438874  353396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/kubeconfig: {Name:mk9039d1d506122ff52224469c478e739c5ebabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:31:07.439133  353396 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 10:31:07.439572  353396 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:31:07.439649  353396 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 10:31:07.439895  353396 addons.go:70] Setting storage-provisioner=true in profile "functional-652709"
	I1213 10:31:07.439924  353396 addons.go:239] Setting addon storage-provisioner=true in "functional-652709"
	I1213 10:31:07.440086  353396 host.go:66] Checking if "functional-652709" exists ...
	I1213 10:31:07.439942  353396 addons.go:70] Setting default-storageclass=true in profile "functional-652709"
	I1213 10:31:07.440166  353396 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-652709"
	I1213 10:31:07.440530  353396 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
	I1213 10:31:07.440672  353396 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
	I1213 10:31:07.445924  353396 out.go:179] * Verifying Kubernetes components...
	I1213 10:31:07.449291  353396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:31:07.477163  353396 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:31:07.477818  353396 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:31:07.477982  353396 kapi.go:59] client config for functional-652709: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt", KeyFile:"/home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.key", CAFile:"/home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
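	[editor's note] The rest.Config dump above shows the API client being built from the profile's client.crt/client.key and the cluster CA recorded in the kubeconfig. A short sketch of loading an equivalent config with client-go's clientcmd package (standard client-go API; the construction is illustrative, not minikube's kapi helper):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// clientcmd resolves the CertFile/KeyFile/CAFile paths recorded in the
	// rest.Config dump above into a usable *rest.Config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22127-307042/kubeconfig")
	if err != nil {
		panic(err)
	}
	fmt.Println("host:", cfg.Host) // e.g. https://192.168.49.2:8441
	fmt.Println("client cert:", cfg.TLSClientConfig.CertFile)
}
```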
	I1213 10:31:07.478289  353396 addons.go:239] Setting addon default-storageclass=true in "functional-652709"
	I1213 10:31:07.478317  353396 host.go:66] Checking if "functional-652709" exists ...
	I1213 10:31:07.478815  353396 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
	I1213 10:31:07.480787  353396 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:07.480804  353396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:31:07.480857  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:07.506052  353396 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:07.506074  353396 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:31:07.506149  353396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:31:07.532221  353396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:31:07.553427  353396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
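	[editor's note] Before the addon manifests are copied in, the two `docker container inspect` calls above resolve which host port Docker mapped to the container's 22/tcp; the resulting port (33125 here) is what the new ssh clients connect to. A hedged sketch of that port discovery using plain os/exec (not minikube's sshutil):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshPort returns the host port Docker mapped to the container's SSH port,
// mirroring the --format template shown in the log above.
func sshPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshPort("functional-652709")
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh on 127.0.0.1:" + port) // the log above shows 33125
	// The manifests are then copied over this connection, e.g.:
	//   scp -P <port> -i .../id_rsa storage-provisioner.yaml docker@127.0.0.1:/tmp/
}
```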
	I1213 10:31:07.654835  353396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:31:07.677297  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:07.691553  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:08.413950  353396 node_ready.go:35] waiting up to 6m0s for node "functional-652709" to be "Ready" ...
	I1213 10:31:08.414025  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:08.414055  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:08.414088  353396 retry.go:31] will retry after 345.496875ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:08.414094  353396 type.go:168] "Request Body" body=""
	I1213 10:31:08.414127  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:08.414139  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:08.414145  353396 retry.go:31] will retry after 223.686843ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
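	[editor's note] Each failed `kubectl apply` above is rerun after a growing randomized delay, as the "will retry after ..." lines show. A minimal sketch of such a retry loop; the backoff constants and jitter are assumptions for illustration, not minikube's actual retry.go parameters:

```go
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry reruns `kubectl apply --force -f manifest` until it
// succeeds or the attempts are exhausted, sleeping a jittered, growing
// delay between tries, similar to the "will retry after" lines above.
func applyWithRetry(manifest string, attempts int) error {
	delay := 300 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = exec.Command("kubectl", "apply", "--force", "-f", manifest).Run(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
		time.Sleep(delay + jitter)
		delay *= 2 // exponential growth, as the lengthening intervals in the log suggest
	}
	return err
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
		fmt.Println("giving up:", err)
	}
}
```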
	I1213 10:31:08.414166  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:08.414498  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
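	[editor's note] The repeated GETs of /api/v1/nodes/functional-652709 implement the "waiting up to 6m0s for node ... to be Ready" loop declared above, with the connection-refused responses treated as not-ready-yet and polled again roughly every 500ms. A sketch of an equivalent wait with client-go (illustrative, not node_ready.go itself):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady polls the node until its Ready condition is True or the timeout
// expires; transient errors (e.g. connection refused while the apiserver
// restarts) are swallowed so the poll simply tries again.
func nodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // not ready yet; retry on the next tick
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22127-307042/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := nodeReady(cs, "functional-652709", 6*time.Minute); err != nil {
		fmt.Println("node not ready:", err)
	}
}
```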
	I1213 10:31:08.639014  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:08.708995  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:08.709048  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:08.709067  353396 retry.go:31] will retry after 375.63163ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:08.760277  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:08.818789  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:08.818835  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:08.818856  353396 retry.go:31] will retry after 406.416897ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:08.915066  353396 type.go:168] "Request Body" body=""
	I1213 10:31:08.915143  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:08.915484  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:09.084944  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:09.142294  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:09.145823  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:09.145856  353396 retry.go:31] will retry after 462.162588ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:09.226047  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:09.284957  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:09.285005  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:09.285029  353396 retry.go:31] will retry after 590.841892ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:09.414170  353396 type.go:168] "Request Body" body=""
	I1213 10:31:09.414270  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:09.414569  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:09.609047  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:09.669723  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:09.669808  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:09.669831  353396 retry.go:31] will retry after 579.936823ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:09.876057  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:09.914654  353396 type.go:168] "Request Body" body=""
	I1213 10:31:09.914781  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:09.915113  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:09.958653  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:09.959319  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:09.959356  353396 retry.go:31] will retry after 607.747477ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:10.250896  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:10.320327  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:10.320375  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:10.320395  353396 retry.go:31] will retry after 1.522220042s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:10.414670  353396 type.go:168] "Request Body" body=""
	I1213 10:31:10.414776  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:10.415078  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:10.415128  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:10.567453  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:10.637133  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:10.637170  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:10.637192  353396 retry.go:31] will retry after 1.738217883s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:10.914619  353396 type.go:168] "Request Body" body=""
	I1213 10:31:10.914713  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:10.915040  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:11.414837  353396 type.go:168] "Request Body" body=""
	I1213 10:31:11.414916  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:11.415223  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:11.842893  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:11.907661  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:11.907696  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:11.907728  353396 retry.go:31] will retry after 2.533033731s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:11.915037  353396 type.go:168] "Request Body" body=""
	I1213 10:31:11.915117  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:11.915423  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:12.376116  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:12.414883  353396 type.go:168] "Request Body" body=""
	I1213 10:31:12.414962  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:12.415244  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:12.415286  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:12.436301  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:12.440043  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:12.440078  353396 retry.go:31] will retry after 2.549851387s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:12.914750  353396 type.go:168] "Request Body" body=""
	I1213 10:31:12.914826  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:12.915091  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:13.414886  353396 type.go:168] "Request Body" body=""
	I1213 10:31:13.414964  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:13.415325  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:13.914980  353396 type.go:168] "Request Body" body=""
	I1213 10:31:13.915058  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:13.915431  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:14.414144  353396 type.go:168] "Request Body" body=""
	I1213 10:31:14.414226  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:14.414516  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:14.441795  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:14.521460  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:14.521500  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:14.521521  353396 retry.go:31] will retry after 3.212514963s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:14.915209  353396 type.go:168] "Request Body" body=""
	I1213 10:31:14.915291  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:14.915586  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:14.915630  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:14.990917  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:15.080462  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:15.084181  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:15.084216  353396 retry.go:31] will retry after 3.733369975s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:15.414758  353396 type.go:168] "Request Body" body=""
	I1213 10:31:15.414836  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:15.415124  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:15.914893  353396 type.go:168] "Request Body" body=""
	I1213 10:31:15.914962  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:15.915239  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:16.415068  353396 type.go:168] "Request Body" body=""
	I1213 10:31:16.415147  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:16.415460  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:16.914139  353396 type.go:168] "Request Body" body=""
	I1213 10:31:16.914218  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:16.914520  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:17.414166  353396 type.go:168] "Request Body" body=""
	I1213 10:31:17.414237  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:17.414497  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:17.414542  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:17.734589  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:17.791638  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:17.795431  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:17.795464  353396 retry.go:31] will retry after 2.280639456s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:17.914828  353396 type.go:168] "Request Body" body=""
	I1213 10:31:17.914907  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:17.915229  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:18.415056  353396 type.go:168] "Request Body" body=""
	I1213 10:31:18.415138  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:18.415477  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:18.817969  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:18.882172  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:18.882215  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:18.882235  353396 retry.go:31] will retry after 4.138686797s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:18.914321  353396 type.go:168] "Request Body" body=""
	I1213 10:31:18.914392  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:18.914663  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:19.414265  353396 type.go:168] "Request Body" body=""
	I1213 10:31:19.414351  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:19.414671  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:19.414743  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:19.914452  353396 type.go:168] "Request Body" body=""
	I1213 10:31:19.914532  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:19.914885  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:20.077334  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:20.142139  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:20.142182  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:20.142203  353396 retry.go:31] will retry after 8.217804099s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:20.414481  353396 type.go:168] "Request Body" body=""
	I1213 10:31:20.414554  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:20.414845  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:20.914228  353396 type.go:168] "Request Body" body=""
	I1213 10:31:20.914302  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:20.914590  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:21.414310  353396 type.go:168] "Request Body" body=""
	I1213 10:31:21.414387  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:21.414748  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:21.414804  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:21.914112  353396 type.go:168] "Request Body" body=""
	I1213 10:31:21.914192  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:21.914465  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:22.414190  353396 type.go:168] "Request Body" body=""
	I1213 10:31:22.414276  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:22.414625  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:22.914222  353396 type.go:168] "Request Body" body=""
	I1213 10:31:22.914304  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:22.914654  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:23.021940  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:23.082413  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:23.086273  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:23.086307  353396 retry.go:31] will retry after 3.228749017s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:23.414853  353396 type.go:168] "Request Body" body=""
	I1213 10:31:23.414928  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:23.415204  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:23.415248  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:23.915086  353396 type.go:168] "Request Body" body=""
	I1213 10:31:23.915169  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:23.915500  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:24.414244  353396 type.go:168] "Request Body" body=""
	I1213 10:31:24.414323  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:24.414750  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:24.914140  353396 type.go:168] "Request Body" body=""
	I1213 10:31:24.914235  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:24.914512  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:25.414276  353396 type.go:168] "Request Body" body=""
	I1213 10:31:25.414350  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:25.414719  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:25.914418  353396 type.go:168] "Request Body" body=""
	I1213 10:31:25.914503  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:25.914851  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:25.914921  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:26.315317  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:26.370308  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:26.374436  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:26.374468  353396 retry.go:31] will retry after 6.181513775s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:26.414616  353396 type.go:168] "Request Body" body=""
	I1213 10:31:26.414702  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:26.414956  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... 3 more node-Ready polls elided (10:31:26.914 to 10:31:27.915): GET https://192.168.49.2:8441/api/v1/nodes/functional-652709, each refused (dial tcp 192.168.49.2:8441: connect: connection refused); node_ready.go:55 "will retry" warning repeated once ...]
	I1213 10:31:28.360839  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:28.414331  353396 type.go:168] "Request Body" body=""
	I1213 10:31:28.414406  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:28.414626  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:28.418709  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:28.418758  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:28.418778  353396 retry.go:31] will retry after 9.214302946s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr identical to the validation error logged just above]
	[... 8 more node-Ready polls elided (10:31:28.914 to 10:31:32.415): GET /api/v1/nodes/functional-652709, each refused; node_ready.go:55 "will retry" warning repeated twice ...]
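Note: the paired "Request"/"Response" lines throughout this log come from a logging wrapper around Go's http.RoundTripper; status="" with milliseconds=0 means the dial failed before any HTTP exchange took place. A self-contained sketch of that technique (an illustration, not the Kubernetes client's actual round_trippers.go):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // loggingTransport logs each request's verb and URL and the response
    // status and latency, mirroring the round_trippers-style lines above.
    type loggingTransport struct {
        next http.RoundTripper
    }

    func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
        start := time.Now()
        fmt.Printf("Request verb=%q url=%q\n", req.Method, req.URL)
        resp, err := t.next.RoundTrip(req)
        ms := time.Since(start).Milliseconds()
        if err != nil {
            // On "connection refused" there is no response, which is why
            // the log prints Response status="" milliseconds=0.
            fmt.Printf("Response status=%q milliseconds=%d err=%v\n", "", ms, err)
            return nil, err
        }
        fmt.Printf("Response status=%q milliseconds=%d\n", resp.Status, ms)
        return resp, nil
    }

    func main() {
        client := &http.Client{Transport: loggingTransport{next: http.DefaultTransport}}
        _, _ = client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-652709")
    }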
	I1213 10:31:32.557021  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:32.617384  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:32.617431  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:32.617463  353396 retry.go:31] will retry after 16.934984193s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout empty; stderr identical to the validation error logged just above]
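Note: "connection refused" (rather than a timeout) means the container's network stack is reachable but nothing is listening on port 8441, i.e. the apiserver process is down rather than the route to 192.168.49.2. It also explains the --validate=false hint in the stderr: kubectl apply first downloads the OpenAPI schema for client-side validation, and that download needs the same unreachable apiserver, so skipping validation would not make the apply itself succeed. A quick Go probe to distinguish the two failure modes (a sketch, not part of the test suite):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Refused = host up, port closed (apiserver not listening).
        // Timeout = host or network unreachable.
        conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
        if err != nil {
            if ne, ok := err.(net.Error); ok && ne.Timeout() {
                fmt.Println("timeout: host/network problem")
            } else {
                fmt.Println("dial failed (likely refused):", err)
            }
            return
        }
        conn.Close()
        fmt.Println("port 8441 is accepting connections")
    }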
	[... 10 more node-Ready polls elided (10:31:32.914 to 10:31:37.414): GET /api/v1/nodes/functional-652709, each refused; node_ready.go:55 "will retry" warning repeated twice ...]
	I1213 10:31:37.633334  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:37.695165  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:37.698650  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:37.698681  353396 retry.go:31] will retry after 9.333447966s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr identical to the validation error logged just above]
	[... 19 more node-Ready polls elided (10:31:37.915 to 10:31:46.915): GET /api/v1/nodes/functional-652709, each refused; node_ready.go:55 "will retry" warning repeated 4 times ...]
	I1213 10:31:47.032831  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:47.089360  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:47.092850  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:47.092882  353396 retry.go:31] will retry after 14.257705184s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr identical to the validation error logged just above]
	[... 5 more node-Ready polls elided (10:31:47.414 to 10:31:49.415): GET /api/v1/nodes/functional-652709, each refused; node_ready.go:55 "will retry" warning repeated once ...]
	I1213 10:31:49.552673  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:49.614333  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:49.614392  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:31:49.614413  353396 retry.go:31] will retry after 23.024485713s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout empty; stderr identical to the validation error logged just above]
	[... 23 more node-Ready polls elided (10:31:49.914 to 10:32:00.914): GET /api/v1/nodes/functional-652709, each refused; node_ready.go:55 "will retry" warning repeated 5 times ...]
	I1213 10:32:01.350855  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:32:01.414382  353396 type.go:168] "Request Body" body=""
	I1213 10:32:01.414452  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:01.414751  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:01.421471  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:32:01.421509  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:32:01.421528  353396 retry.go:31] will retry after 32.770422349s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr identical to the validation error logged just above]
	[... 22 more node-Ready polls elided (10:32:01.914 to 10:32:12.414): GET /api/v1/nodes/functional-652709, each refused; node_ready.go:55 "will retry" warning repeated 5 times ...]
	I1213 10:32:12.639532  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:32:12.701723  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:32:12.701768  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:32:12.701788  353396 retry.go:31] will retry after 24.373252759s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
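
The apply failure above comes from minikube's addon retry path: kubectl's client-side validation needs the apiserver's OpenAPI endpoint, which is refusing connections, so addons.go records the failure and retry.go schedules another attempt ("will retry after 24.37s"). Below is a minimal Go sketch of that retry-with-backoff pattern under stated assumptions; applyAddon and the timing constants are hypothetical stand-ins, not minikube's actual code.

// Minimal sketch (not minikube's retry.go): retry an "apply" step with
// capped exponential backoff plus jitter, the pattern the log above
// reflects. applyAddon is a hypothetical stand-in for running
// `kubectl apply --force -f <manifest>` over SSH on the node.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func applyAddon(manifest string) error {
	// Stand-in: always fails, as the real call does while the
	// apiserver is refusing connections.
	return errors.New("connect: connection refused")
}

func applyWithRetry(manifest string, attempts int, base time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = applyAddon(manifest); err == nil {
			return nil
		}
		// Exponential backoff with jitter, capped at 30s.
		d := base << uint(i)
		if d > 30*time.Second {
			d = 30 * time.Second
		}
		d += time.Duration(rand.Int63n(int64(d) / 2))
		fmt.Printf("apply failed, will retry after %s: %v\n", d, err)
		time.Sleep(d)
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 3, 2*time.Second); err != nil {
		fmt.Println(err)
	}
}
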
	I1213 10:32:12.915117  353396 type.go:168] "Request Body" body=""
	I1213 10:32:12.915211  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:12.915511  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:13.414252  353396 type.go:168] "Request Body" body=""
	I1213 10:32:13.414325  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:13.414721  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:13.414794  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:13.914428  353396 type.go:168] "Request Body" body=""
	I1213 10:32:13.914518  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:13.914913  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:14.414265  353396 type.go:168] "Request Body" body=""
	I1213 10:32:14.414377  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:14.414786  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:14.914281  353396 type.go:168] "Request Body" body=""
	I1213 10:32:14.914360  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:14.914710  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:15.414252  353396 type.go:168] "Request Body" body=""
	I1213 10:32:15.414344  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:15.414630  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:15.914243  353396 type.go:168] "Request Body" body=""
	I1213 10:32:15.914331  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:15.914660  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:15.914750  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:16.414450  353396 type.go:168] "Request Body" body=""
	I1213 10:32:16.414531  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:16.414846  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:16.914165  353396 type.go:168] "Request Body" body=""
	I1213 10:32:16.914233  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:16.914541  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:17.414213  353396 type.go:168] "Request Body" body=""
	I1213 10:32:17.414341  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:17.414625  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:17.914712  353396 type.go:168] "Request Body" body=""
	I1213 10:32:17.914803  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:17.915126  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:17.915184  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:18.414920  353396 type.go:168] "Request Body" body=""
	I1213 10:32:18.415009  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:18.415286  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:18.915162  353396 type.go:168] "Request Body" body=""
	I1213 10:32:18.915251  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:18.915598  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:19.414275  353396 type.go:168] "Request Body" body=""
	I1213 10:32:19.414357  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:19.414661  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:19.914425  353396 type.go:168] "Request Body" body=""
	I1213 10:32:19.914592  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:19.914937  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:20.414768  353396 type.go:168] "Request Body" body=""
	I1213 10:32:20.414852  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:20.415220  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:20.415278  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:20.915055  353396 type.go:168] "Request Body" body=""
	I1213 10:32:20.915156  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:20.915495  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:21.414184  353396 type.go:168] "Request Body" body=""
	I1213 10:32:21.414260  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:21.414555  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:21.914250  353396 type.go:168] "Request Body" body=""
	I1213 10:32:21.914326  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:21.914677  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:22.414287  353396 type.go:168] "Request Body" body=""
	I1213 10:32:22.414370  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:22.414741  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:22.914735  353396 type.go:168] "Request Body" body=""
	I1213 10:32:22.914804  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:22.915060  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:22.915107  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:23.414877  353396 type.go:168] "Request Body" body=""
	I1213 10:32:23.414953  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:23.415252  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:23.915036  353396 type.go:168] "Request Body" body=""
	I1213 10:32:23.915115  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:23.915451  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:24.415135  353396 type.go:168] "Request Body" body=""
	I1213 10:32:24.415211  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:24.415473  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:24.914198  353396 type.go:168] "Request Body" body=""
	I1213 10:32:24.914282  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:24.914640  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:25.414436  353396 type.go:168] "Request Body" body=""
	I1213 10:32:25.414514  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:25.414854  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:25.414914  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:25.914152  353396 type.go:168] "Request Body" body=""
	I1213 10:32:25.914219  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:25.914483  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:26.414214  353396 type.go:168] "Request Body" body=""
	I1213 10:32:26.414314  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:26.414636  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:26.914231  353396 type.go:168] "Request Body" body=""
	I1213 10:32:26.914307  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:26.914637  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:27.414336  353396 type.go:168] "Request Body" body=""
	I1213 10:32:27.414402  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:27.414666  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:27.914790  353396 type.go:168] "Request Body" body=""
	I1213 10:32:27.914883  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:27.915207  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:27.915256  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:28.414990  353396 type.go:168] "Request Body" body=""
	I1213 10:32:28.415074  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:28.415436  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:28.915099  353396 type.go:168] "Request Body" body=""
	I1213 10:32:28.915173  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:28.915437  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:29.414163  353396 type.go:168] "Request Body" body=""
	I1213 10:32:29.414250  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:29.414561  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:29.914302  353396 type.go:168] "Request Body" body=""
	I1213 10:32:29.914399  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:29.914733  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:30.414157  353396 type.go:168] "Request Body" body=""
	I1213 10:32:30.414241  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:30.414552  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:30.414604  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:30.914233  353396 type.go:168] "Request Body" body=""
	I1213 10:32:30.914307  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:30.914656  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:31.414263  353396 type.go:168] "Request Body" body=""
	I1213 10:32:31.414357  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:31.414708  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:31.914203  353396 type.go:168] "Request Body" body=""
	I1213 10:32:31.914273  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:31.914531  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:32.414222  353396 type.go:168] "Request Body" body=""
	I1213 10:32:32.414304  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:32.414640  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:32.414727  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:32.914510  353396 type.go:168] "Request Body" body=""
	I1213 10:32:32.914599  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:32.914973  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:33.414825  353396 type.go:168] "Request Body" body=""
	I1213 10:32:33.414915  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:33.415280  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:33.915101  353396 type.go:168] "Request Body" body=""
	I1213 10:32:33.915178  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:33.915518  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:34.192937  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:32:34.265284  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:32:34.265320  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:32:34.265405  353396 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 10:32:34.414970  353396 type.go:168] "Request Body" body=""
	I1213 10:32:34.415052  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:34.415423  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:34.415491  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:34.914214  353396 type.go:168] "Request Body" body=""
	I1213 10:32:34.914301  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:34.914655  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:35.414268  353396 type.go:168] "Request Body" body=""
	I1213 10:32:35.414356  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:35.414678  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:35.914239  353396 type.go:168] "Request Body" body=""
	I1213 10:32:35.914322  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:35.914704  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:36.414407  353396 type.go:168] "Request Body" body=""
	I1213 10:32:36.414485  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:36.414823  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:36.914200  353396 type.go:168] "Request Body" body=""
	I1213 10:32:36.914292  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:36.914625  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:36.914719  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:37.076016  353396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:32:37.141132  353396 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:32:37.141183  353396 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:32:37.141286  353396 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 10:32:37.146231  353396 out.go:179] * Enabled addons: 
	I1213 10:32:37.149102  353396 addons.go:530] duration metric: took 1m29.709445532s for enable addons: enabled=[]
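
Both addon applies ultimately fail and the phase finishes with enabled=[], because every kubectl call needs the apiserver that is still refusing connections on 8441 (the suggested --validate=false would only skip validation, not reach the server). A minimal sketch of gating such applies on the apiserver's /readyz health endpoint follows; the address and the InsecureSkipVerify shortcut are illustrative assumptions, where a real client would trust the cluster CA from the kubeconfig.

// Minimal sketch, not part of minikube: probe the apiserver's /readyz
// endpoint before attempting addon applies, so the "Enabling '<addon>'
// returned an error" path above is only taken once the control plane
// is actually reachable.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func apiserverReady(addr string) bool {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Illustrative shortcut only; do not skip verification
			// outside a sketch.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://" + addr + "/readyz")
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	for !apiserverReady("192.168.49.2:8441") {
		fmt.Println("apiserver not ready, waiting...")
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver ready; safe to apply addon manifests")
}
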
	I1213 10:32:37.414592  353396 type.go:168] "Request Body" body=""
	I1213 10:32:37.414736  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:37.415128  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:37.914163  353396 type.go:168] "Request Body" body=""
	I1213 10:32:37.914246  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:37.914580  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:38.414336  353396 type.go:168] "Request Body" body=""
	I1213 10:32:38.414415  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:38.414780  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:38.914239  353396 type.go:168] "Request Body" body=""
	I1213 10:32:38.914317  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:38.914675  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:38.914752  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:39.414390  353396 type.go:168] "Request Body" body=""
	I1213 10:32:39.414462  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:39.414811  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:39.914220  353396 type.go:168] "Request Body" body=""
	I1213 10:32:39.914296  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:39.914620  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:40.414231  353396 type.go:168] "Request Body" body=""
	I1213 10:32:40.414307  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:40.414622  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:40.914193  353396 type.go:168] "Request Body" body=""
	I1213 10:32:40.914271  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:40.914548  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:41.414251  353396 type.go:168] "Request Body" body=""
	I1213 10:32:41.414348  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:41.414708  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:41.414763  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:41.914229  353396 type.go:168] "Request Body" body=""
	I1213 10:32:41.914327  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:41.914643  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:42.414161  353396 type.go:168] "Request Body" body=""
	I1213 10:32:42.414248  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:42.414516  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:42.914567  353396 type.go:168] "Request Body" body=""
	I1213 10:32:42.914643  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:42.914974  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:43.414788  353396 type.go:168] "Request Body" body=""
	I1213 10:32:43.414863  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:43.415192  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:43.415248  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:43.915667  353396 type.go:168] "Request Body" body=""
	I1213 10:32:43.915743  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:43.916016  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:44.414833  353396 type.go:168] "Request Body" body=""
	I1213 10:32:44.414913  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:44.415264  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:44.915103  353396 type.go:168] "Request Body" body=""
	I1213 10:32:44.915182  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:44.915522  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:45.414185  353396 type.go:168] "Request Body" body=""
	I1213 10:32:45.414262  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:45.414578  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:45.914231  353396 type.go:168] "Request Body" body=""
	I1213 10:32:45.914307  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:45.914655  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:45.914730  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:46.414248  353396 type.go:168] "Request Body" body=""
	I1213 10:32:46.414348  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:46.414706  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:46.914404  353396 type.go:168] "Request Body" body=""
	I1213 10:32:46.914482  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:46.914848  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:47.414245  353396 type.go:168] "Request Body" body=""
	I1213 10:32:47.414332  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:47.414670  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:47.915115  353396 type.go:168] "Request Body" body=""
	I1213 10:32:47.915188  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:47.915496  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:47.915548  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:48.414163  353396 type.go:168] "Request Body" body=""
	I1213 10:32:48.414231  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:48.414501  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:48.914202  353396 type.go:168] "Request Body" body=""
	I1213 10:32:48.914276  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:48.914656  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:49.414387  353396 type.go:168] "Request Body" body=""
	I1213 10:32:49.414468  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:49.414814  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:49.914540  353396 type.go:168] "Request Body" body=""
	I1213 10:32:49.914615  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:49.914986  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:50.414789  353396 type.go:168] "Request Body" body=""
	I1213 10:32:50.414867  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:50.415215  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:50.415272  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:50.915036  353396 type.go:168] "Request Body" body=""
	I1213 10:32:50.915111  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:50.915455  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:51.414115  353396 type.go:168] "Request Body" body=""
	I1213 10:32:51.414190  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:51.414454  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:51.914146  353396 type.go:168] "Request Body" body=""
	I1213 10:32:51.914227  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:51.914572  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:52.414290  353396 type.go:168] "Request Body" body=""
	I1213 10:32:52.414382  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:52.414734  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:52.914517  353396 type.go:168] "Request Body" body=""
	I1213 10:32:52.914591  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:52.914875  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:52.914926  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:53.414207  353396 type.go:168] "Request Body" body=""
	I1213 10:32:53.414292  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:53.414618  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:53.914425  353396 type.go:168] "Request Body" body=""
	I1213 10:32:53.914515  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:53.914900  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:54.414164  353396 type.go:168] "Request Body" body=""
	I1213 10:32:54.414246  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:54.414585  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:54.915092  353396 type.go:168] "Request Body" body=""
	I1213 10:32:54.915167  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:54.915487  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:54.915545  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:55.414202  353396 type.go:168] "Request Body" body=""
	I1213 10:32:55.414280  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:55.414623  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:55.914337  353396 type.go:168] "Request Body" body=""
	I1213 10:32:55.914403  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:55.914665  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:56.415120  353396 type.go:168] "Request Body" body=""
	I1213 10:32:56.415206  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:56.415536  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:56.914238  353396 type.go:168] "Request Body" body=""
	I1213 10:32:56.914316  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:56.914647  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:57.414233  353396 type.go:168] "Request Body" body=""
	I1213 10:32:57.414314  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:57.414566  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:57.414610  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:57.914675  353396 type.go:168] "Request Body" body=""
	I1213 10:32:57.914760  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:57.915078  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:58.414843  353396 type.go:168] "Request Body" body=""
	I1213 10:32:58.414921  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:58.415260  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:58.914928  353396 type.go:168] "Request Body" body=""
	I1213 10:32:58.914994  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:58.915260  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:59.414997  353396 type.go:168] "Request Body" body=""
	I1213 10:32:59.415070  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:59.415409  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:59.415463  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:59.915087  353396 type.go:168] "Request Body" body=""
	I1213 10:32:59.915169  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:59.915509  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:00.414239  353396 type.go:168] "Request Body" body=""
	I1213 10:33:00.414313  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:00.414605  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:00.914240  353396 type.go:168] "Request Body" body=""
	I1213 10:33:00.914313  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:00.914656  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:01.414407  353396 type.go:168] "Request Body" body=""
	I1213 10:33:01.414488  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:01.414812  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:01.914174  353396 type.go:168] "Request Body" body=""
	I1213 10:33:01.914242  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:01.914578  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:01.914642  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical polling continues: GET https://192.168.49.2:8441/api/v1/nodes/functional-652709 every 500ms from 10:33:02 through 10:34:01, each request logged with the same Accept and User-Agent headers and an empty response, with a node_ready.go:55 "connection refused" warning (will retry) logged roughly every 2.5s ...]
	I1213 10:34:01.914168  353396 type.go:168] "Request Body" body=""
	I1213 10:34:01.914244  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:01.914585  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:02.414240  353396 type.go:168] "Request Body" body=""
	I1213 10:34:02.414320  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:02.414671  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:02.914495  353396 type.go:168] "Request Body" body=""
	I1213 10:34:02.914572  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:02.914905  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:03.414563  353396 type.go:168] "Request Body" body=""
	I1213 10:34:03.414642  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:03.414937  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:03.414981  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:03.914802  353396 type.go:168] "Request Body" body=""
	I1213 10:34:03.914886  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:03.915200  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:04.415061  353396 type.go:168] "Request Body" body=""
	I1213 10:34:04.415173  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:04.415604  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:04.915045  353396 type.go:168] "Request Body" body=""
	I1213 10:34:04.915117  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:04.915454  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:05.414181  353396 type.go:168] "Request Body" body=""
	I1213 10:34:05.414260  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:05.414598  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:05.914312  353396 type.go:168] "Request Body" body=""
	I1213 10:34:05.914397  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:05.914761  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:05.914818  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:06.414172  353396 type.go:168] "Request Body" body=""
	I1213 10:34:06.414246  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:06.414578  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:06.914215  353396 type.go:168] "Request Body" body=""
	I1213 10:34:06.914294  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:06.914638  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:07.414373  353396 type.go:168] "Request Body" body=""
	I1213 10:34:07.414449  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:07.414801  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:07.914926  353396 type.go:168] "Request Body" body=""
	I1213 10:34:07.914993  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:07.915307  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:07.915360  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:08.415127  353396 type.go:168] "Request Body" body=""
	I1213 10:34:08.415205  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:08.415596  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:08.914374  353396 type.go:168] "Request Body" body=""
	I1213 10:34:08.914456  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:08.914801  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:09.414148  353396 type.go:168] "Request Body" body=""
	I1213 10:34:09.414219  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:09.414479  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:09.914227  353396 type.go:168] "Request Body" body=""
	I1213 10:34:09.914306  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:09.914661  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:10.414240  353396 type.go:168] "Request Body" body=""
	I1213 10:34:10.414319  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:10.414680  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:10.414778  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:10.918812  353396 type.go:168] "Request Body" body=""
	I1213 10:34:10.918890  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:10.919160  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:11.415030  353396 type.go:168] "Request Body" body=""
	I1213 10:34:11.415107  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:11.415436  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:11.914150  353396 type.go:168] "Request Body" body=""
	I1213 10:34:11.914232  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:11.914571  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:12.415071  353396 type.go:168] "Request Body" body=""
	I1213 10:34:12.415146  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:12.415421  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:12.415479  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:12.914213  353396 type.go:168] "Request Body" body=""
	I1213 10:34:12.914288  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:12.914622  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:13.414338  353396 type.go:168] "Request Body" body=""
	I1213 10:34:13.414421  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:13.414784  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:13.914194  353396 type.go:168] "Request Body" body=""
	I1213 10:34:13.914270  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:13.914538  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:14.414217  353396 type.go:168] "Request Body" body=""
	I1213 10:34:14.414294  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:14.414624  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:14.914210  353396 type.go:168] "Request Body" body=""
	I1213 10:34:14.914290  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:14.914590  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:14.914639  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:15.414121  353396 type.go:168] "Request Body" body=""
	I1213 10:34:15.414260  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:15.414569  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:15.914203  353396 type.go:168] "Request Body" body=""
	I1213 10:34:15.914284  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:15.914613  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:16.414225  353396 type.go:168] "Request Body" body=""
	I1213 10:34:16.414308  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:16.414648  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:16.914359  353396 type.go:168] "Request Body" body=""
	I1213 10:34:16.914447  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:16.914753  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:16.914798  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:17.414239  353396 type.go:168] "Request Body" body=""
	I1213 10:34:17.414312  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:17.414646  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:17.914569  353396 type.go:168] "Request Body" body=""
	I1213 10:34:17.914646  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:17.914997  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:18.414794  353396 type.go:168] "Request Body" body=""
	I1213 10:34:18.414864  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:18.415130  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:18.914878  353396 type.go:168] "Request Body" body=""
	I1213 10:34:18.914956  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:18.915256  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:18.915309  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:19.415048  353396 type.go:168] "Request Body" body=""
	I1213 10:34:19.415124  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:19.415473  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:19.914155  353396 type.go:168] "Request Body" body=""
	I1213 10:34:19.914239  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:19.914557  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:20.414216  353396 type.go:168] "Request Body" body=""
	I1213 10:34:20.414293  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:20.414595  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:20.914298  353396 type.go:168] "Request Body" body=""
	I1213 10:34:20.914378  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:20.914742  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:21.414175  353396 type.go:168] "Request Body" body=""
	I1213 10:34:21.414247  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:21.414574  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:21.414628  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:21.914278  353396 type.go:168] "Request Body" body=""
	I1213 10:34:21.914361  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:21.914745  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:22.414284  353396 type.go:168] "Request Body" body=""
	I1213 10:34:22.414361  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:22.414747  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:22.914549  353396 type.go:168] "Request Body" body=""
	I1213 10:34:22.914626  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:22.914988  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:23.414779  353396 type.go:168] "Request Body" body=""
	I1213 10:34:23.414855  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:23.415214  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:23.415277  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:23.915088  353396 type.go:168] "Request Body" body=""
	I1213 10:34:23.915170  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:23.915507  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:24.414168  353396 type.go:168] "Request Body" body=""
	I1213 10:34:24.414241  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:24.414497  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:24.914176  353396 type.go:168] "Request Body" body=""
	I1213 10:34:24.914250  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:24.914580  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:25.414317  353396 type.go:168] "Request Body" body=""
	I1213 10:34:25.414397  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:25.414758  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:25.914443  353396 type.go:168] "Request Body" body=""
	I1213 10:34:25.914516  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:25.914878  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:25.914936  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:26.414193  353396 type.go:168] "Request Body" body=""
	I1213 10:34:26.414269  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:26.414575  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:26.914218  353396 type.go:168] "Request Body" body=""
	I1213 10:34:26.914293  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:26.914611  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:27.414157  353396 type.go:168] "Request Body" body=""
	I1213 10:34:27.414224  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:27.414475  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:27.914651  353396 type.go:168] "Request Body" body=""
	I1213 10:34:27.914747  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:27.915082  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:27.915143  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:28.414747  353396 type.go:168] "Request Body" body=""
	I1213 10:34:28.414831  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:28.415166  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:28.914918  353396 type.go:168] "Request Body" body=""
	I1213 10:34:28.914994  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:28.915317  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:29.415099  353396 type.go:168] "Request Body" body=""
	I1213 10:34:29.415182  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:29.415527  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:29.914143  353396 type.go:168] "Request Body" body=""
	I1213 10:34:29.914235  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:29.914632  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:30.414347  353396 type.go:168] "Request Body" body=""
	I1213 10:34:30.414415  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:30.414708  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:30.414755  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:30.914237  353396 type.go:168] "Request Body" body=""
	I1213 10:34:30.914320  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:30.914657  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:31.414414  353396 type.go:168] "Request Body" body=""
	I1213 10:34:31.414503  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:31.414889  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:31.914157  353396 type.go:168] "Request Body" body=""
	I1213 10:34:31.914230  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:31.914496  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:32.414209  353396 type.go:168] "Request Body" body=""
	I1213 10:34:32.414292  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:32.414648  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:32.914128  353396 type.go:168] "Request Body" body=""
	I1213 10:34:32.914211  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:32.914560  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:32.914616  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:33.414256  353396 type.go:168] "Request Body" body=""
	I1213 10:34:33.414326  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:33.414617  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:33.914297  353396 type.go:168] "Request Body" body=""
	I1213 10:34:33.914377  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:33.914762  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:34.414238  353396 type.go:168] "Request Body" body=""
	I1213 10:34:34.414315  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:34.414643  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:34.914151  353396 type.go:168] "Request Body" body=""
	I1213 10:34:34.914224  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:34.914486  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:35.414223  353396 type.go:168] "Request Body" body=""
	I1213 10:34:35.414304  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:35.414642  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:35.414735  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:35.914235  353396 type.go:168] "Request Body" body=""
	I1213 10:34:35.914320  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:35.914658  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:36.414261  353396 type.go:168] "Request Body" body=""
	I1213 10:34:36.414332  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:36.414605  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:36.914211  353396 type.go:168] "Request Body" body=""
	I1213 10:34:36.914285  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:36.914640  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:37.414211  353396 type.go:168] "Request Body" body=""
	I1213 10:34:37.414289  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:37.414584  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:37.914675  353396 type.go:168] "Request Body" body=""
	I1213 10:34:37.914757  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:37.915023  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:37.915064  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:38.414903  353396 type.go:168] "Request Body" body=""
	I1213 10:34:38.414986  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:38.415396  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:38.914137  353396 type.go:168] "Request Body" body=""
	I1213 10:34:38.914223  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:38.914580  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:39.414172  353396 type.go:168] "Request Body" body=""
	I1213 10:34:39.414253  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:39.414582  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:39.914286  353396 type.go:168] "Request Body" body=""
	I1213 10:34:39.914363  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:39.914715  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:40.414232  353396 type.go:168] "Request Body" body=""
	I1213 10:34:40.414314  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:40.414677  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:40.414753  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:40.914094  353396 type.go:168] "Request Body" body=""
	I1213 10:34:40.914175  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:40.914491  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:41.414243  353396 type.go:168] "Request Body" body=""
	I1213 10:34:41.414321  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:41.414666  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:41.914412  353396 type.go:168] "Request Body" body=""
	I1213 10:34:41.914495  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:41.914870  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:42.414297  353396 type.go:168] "Request Body" body=""
	I1213 10:34:42.414371  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:42.414633  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:42.914585  353396 type.go:168] "Request Body" body=""
	I1213 10:34:42.914668  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:42.915024  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:42.915079  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:43.414607  353396 type.go:168] "Request Body" body=""
	I1213 10:34:43.414702  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:43.415071  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:43.914792  353396 type.go:168] "Request Body" body=""
	I1213 10:34:43.914869  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:43.915208  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:44.415017  353396 type.go:168] "Request Body" body=""
	I1213 10:34:44.415093  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:44.415470  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:44.915253  353396 type.go:168] "Request Body" body=""
	I1213 10:34:44.915329  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:44.915668  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:44.915722  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:45.414372  353396 type.go:168] "Request Body" body=""
	I1213 10:34:45.414449  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:45.414746  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:45.914236  353396 type.go:168] "Request Body" body=""
	I1213 10:34:45.914316  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:45.914655  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:46.414239  353396 type.go:168] "Request Body" body=""
	I1213 10:34:46.414322  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:46.414658  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:46.915158  353396 type.go:168] "Request Body" body=""
	I1213 10:34:46.915231  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:46.915495  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:47.414163  353396 type.go:168] "Request Body" body=""
	I1213 10:34:47.414242  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:47.414552  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:47.414603  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:47.914533  353396 type.go:168] "Request Body" body=""
	I1213 10:34:47.914615  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:47.914992  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:48.414726  353396 type.go:168] "Request Body" body=""
	I1213 10:34:48.414795  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:48.415059  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:48.914847  353396 type.go:168] "Request Body" body=""
	I1213 10:34:48.914935  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:48.915268  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:49.415068  353396 type.go:168] "Request Body" body=""
	I1213 10:34:49.415159  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:49.415526  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:49.415582  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:49.914165  353396 type.go:168] "Request Body" body=""
	I1213 10:34:49.914239  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:49.914499  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:50.414183  353396 type.go:168] "Request Body" body=""
	I1213 10:34:50.414258  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:50.414554  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:50.914233  353396 type.go:168] "Request Body" body=""
	I1213 10:34:50.914307  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:50.914623  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:51.414141  353396 type.go:168] "Request Body" body=""
	I1213 10:34:51.414231  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:51.414525  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:51.914233  353396 type.go:168] "Request Body" body=""
	I1213 10:34:51.914311  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:51.914675  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:51.914750  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	[... ~120 further identical poll cycles elided: GET https://192.168.49.2:8441/api/v1/nodes/functional-652709 every 500ms, each returning no response and periodically logging node_ready.go:55 "error getting node \"functional-652709\" condition \"Ready\" status (will retry): dial tcp 192.168.49.2:8441: connect: connection refused", from 10:34:52 to 10:35:52 ...]
	I1213 10:35:53.414626  353396 type.go:168] "Request Body" body=""
	I1213 10:35:53.414743  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:53.415155  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:53.914985  353396 type.go:168] "Request Body" body=""
	I1213 10:35:53.915060  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:53.915423  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:54.414132  353396 type.go:168] "Request Body" body=""
	I1213 10:35:54.414212  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:54.414538  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:54.914221  353396 type.go:168] "Request Body" body=""
	I1213 10:35:54.914300  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:54.914639  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:55.414361  353396 type.go:168] "Request Body" body=""
	I1213 10:35:55.414442  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:55.414760  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:55.414814  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:55.914153  353396 type.go:168] "Request Body" body=""
	I1213 10:35:55.914231  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:55.914493  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:56.414257  353396 type.go:168] "Request Body" body=""
	I1213 10:35:56.414339  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:56.414657  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:56.914216  353396 type.go:168] "Request Body" body=""
	I1213 10:35:56.914293  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:56.914667  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:57.414176  353396 type.go:168] "Request Body" body=""
	I1213 10:35:57.414254  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:57.414584  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:57.914966  353396 type.go:168] "Request Body" body=""
	I1213 10:35:57.915050  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:57.915391  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:57.915453  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:58.414132  353396 type.go:168] "Request Body" body=""
	I1213 10:35:58.414215  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:58.414528  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:58.914158  353396 type.go:168] "Request Body" body=""
	I1213 10:35:58.914236  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:58.914510  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:59.414124  353396 type.go:168] "Request Body" body=""
	I1213 10:35:59.414208  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:59.414536  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:59.914263  353396 type.go:168] "Request Body" body=""
	I1213 10:35:59.914349  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:59.914758  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:00.421144  353396 type.go:168] "Request Body" body=""
	I1213 10:36:00.421250  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:00.421612  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:00.421665  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:00.914230  353396 type.go:168] "Request Body" body=""
	I1213 10:36:00.914305  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:00.914644  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:01.414215  353396 type.go:168] "Request Body" body=""
	I1213 10:36:01.414292  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:01.414622  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:01.914179  353396 type.go:168] "Request Body" body=""
	I1213 10:36:01.914256  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:01.914522  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:02.414207  353396 type.go:168] "Request Body" body=""
	I1213 10:36:02.414283  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:02.414571  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:02.914503  353396 type.go:168] "Request Body" body=""
	I1213 10:36:02.914581  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:02.914941  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:02.915005  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:03.414758  353396 type.go:168] "Request Body" body=""
	I1213 10:36:03.414829  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:03.415178  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:03.914982  353396 type.go:168] "Request Body" body=""
	I1213 10:36:03.915057  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:03.915402  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:04.415064  353396 type.go:168] "Request Body" body=""
	I1213 10:36:04.415144  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:04.415523  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:04.914219  353396 type.go:168] "Request Body" body=""
	I1213 10:36:04.914298  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:04.914617  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:05.414231  353396 type.go:168] "Request Body" body=""
	I1213 10:36:05.414310  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:05.414671  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:05.414749  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:05.914422  353396 type.go:168] "Request Body" body=""
	I1213 10:36:05.914498  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:05.914864  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:06.414177  353396 type.go:168] "Request Body" body=""
	I1213 10:36:06.414262  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:06.414578  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:06.914278  353396 type.go:168] "Request Body" body=""
	I1213 10:36:06.914363  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:06.914742  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:07.414300  353396 type.go:168] "Request Body" body=""
	I1213 10:36:07.414382  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:07.414720  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:07.414787  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:07.914791  353396 type.go:168] "Request Body" body=""
	I1213 10:36:07.914860  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:07.915123  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:08.414897  353396 type.go:168] "Request Body" body=""
	I1213 10:36:08.414981  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:08.415336  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:08.915032  353396 type.go:168] "Request Body" body=""
	I1213 10:36:08.915117  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:08.915466  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:09.414191  353396 type.go:168] "Request Body" body=""
	I1213 10:36:09.414260  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:09.414540  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:09.914271  353396 type.go:168] "Request Body" body=""
	I1213 10:36:09.914352  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:09.914675  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:09.914752  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:10.414138  353396 type.go:168] "Request Body" body=""
	I1213 10:36:10.414216  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:10.414557  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:10.914195  353396 type.go:168] "Request Body" body=""
	I1213 10:36:10.914266  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:10.914534  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:11.414263  353396 type.go:168] "Request Body" body=""
	I1213 10:36:11.414339  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:11.414753  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:11.914459  353396 type.go:168] "Request Body" body=""
	I1213 10:36:11.914533  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:11.914890  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:11.914948  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:12.414139  353396 type.go:168] "Request Body" body=""
	I1213 10:36:12.414211  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:12.414474  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:12.914342  353396 type.go:168] "Request Body" body=""
	I1213 10:36:12.914427  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:12.914750  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:13.414215  353396 type.go:168] "Request Body" body=""
	I1213 10:36:13.414295  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:13.414650  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:13.914372  353396 type.go:168] "Request Body" body=""
	I1213 10:36:13.914451  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:13.914752  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:14.414251  353396 type.go:168] "Request Body" body=""
	I1213 10:36:14.414328  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:14.414656  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:14.414721  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:14.914256  353396 type.go:168] "Request Body" body=""
	I1213 10:36:14.914328  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:14.914611  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:15.415149  353396 type.go:168] "Request Body" body=""
	I1213 10:36:15.415221  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:15.415540  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:15.914232  353396 type.go:168] "Request Body" body=""
	I1213 10:36:15.914308  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:15.914678  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:16.414245  353396 type.go:168] "Request Body" body=""
	I1213 10:36:16.414325  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:16.414657  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:16.914285  353396 type.go:168] "Request Body" body=""
	I1213 10:36:16.914367  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:16.914649  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:16.914725  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:17.414244  353396 type.go:168] "Request Body" body=""
	I1213 10:36:17.414333  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:17.414644  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:17.914739  353396 type.go:168] "Request Body" body=""
	I1213 10:36:17.914821  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:17.915139  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:18.414875  353396 type.go:168] "Request Body" body=""
	I1213 10:36:18.414955  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:18.415226  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:18.915006  353396 type.go:168] "Request Body" body=""
	I1213 10:36:18.915082  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:18.915415  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:18.915472  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:19.415096  353396 type.go:168] "Request Body" body=""
	I1213 10:36:19.415183  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:19.415488  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:19.914201  353396 type.go:168] "Request Body" body=""
	I1213 10:36:19.914273  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:19.914619  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:20.414338  353396 type.go:168] "Request Body" body=""
	I1213 10:36:20.414409  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:20.414746  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:20.914260  353396 type.go:168] "Request Body" body=""
	I1213 10:36:20.914335  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:20.914704  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:21.414252  353396 type.go:168] "Request Body" body=""
	I1213 10:36:21.414338  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:21.414656  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:21.414724  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:21.914251  353396 type.go:168] "Request Body" body=""
	I1213 10:36:21.914328  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:21.914668  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:22.414268  353396 type.go:168] "Request Body" body=""
	I1213 10:36:22.414350  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:22.414680  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:22.914474  353396 type.go:168] "Request Body" body=""
	I1213 10:36:22.914553  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:22.914836  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:23.414235  353396 type.go:168] "Request Body" body=""
	I1213 10:36:23.414326  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:23.414670  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:23.414743  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:23.914266  353396 type.go:168] "Request Body" body=""
	I1213 10:36:23.914367  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:23.914763  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:24.414152  353396 type.go:168] "Request Body" body=""
	I1213 10:36:24.414223  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:24.414481  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:24.914197  353396 type.go:168] "Request Body" body=""
	I1213 10:36:24.914304  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:24.914663  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:25.414263  353396 type.go:168] "Request Body" body=""
	I1213 10:36:25.414339  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:25.414676  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:25.914948  353396 type.go:168] "Request Body" body=""
	I1213 10:36:25.915020  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:25.915277  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:25.915318  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:26.415116  353396 type.go:168] "Request Body" body=""
	I1213 10:36:26.415208  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:26.415550  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:26.914250  353396 type.go:168] "Request Body" body=""
	I1213 10:36:26.914329  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:26.914612  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:27.414291  353396 type.go:168] "Request Body" body=""
	I1213 10:36:27.414364  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:27.414625  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:27.914739  353396 type.go:168] "Request Body" body=""
	I1213 10:36:27.914816  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:27.915095  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:28.414897  353396 type.go:168] "Request Body" body=""
	I1213 10:36:28.414982  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:28.415303  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:28.415358  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:28.915084  353396 type.go:168] "Request Body" body=""
	I1213 10:36:28.915156  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:28.915451  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:29.414204  353396 type.go:168] "Request Body" body=""
	I1213 10:36:29.414283  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:29.414602  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:29.914216  353396 type.go:168] "Request Body" body=""
	I1213 10:36:29.914292  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:29.914661  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:30.414927  353396 type.go:168] "Request Body" body=""
	I1213 10:36:30.415000  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:30.415303  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:30.915117  353396 type.go:168] "Request Body" body=""
	I1213 10:36:30.915200  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:30.915511  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:30.915566  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:31.414255  353396 type.go:168] "Request Body" body=""
	I1213 10:36:31.414349  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:31.414739  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:31.914164  353396 type.go:168] "Request Body" body=""
	I1213 10:36:31.914237  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:31.914519  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:32.414229  353396 type.go:168] "Request Body" body=""
	I1213 10:36:32.414304  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:32.414647  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:32.914523  353396 type.go:168] "Request Body" body=""
	I1213 10:36:32.914604  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:32.914915  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:33.414159  353396 type.go:168] "Request Body" body=""
	I1213 10:36:33.414232  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:33.414567  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:33.414632  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:33.914300  353396 type.go:168] "Request Body" body=""
	I1213 10:36:33.914382  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:33.914670  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:34.414374  353396 type.go:168] "Request Body" body=""
	I1213 10:36:34.414451  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:34.414727  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:34.914184  353396 type.go:168] "Request Body" body=""
	I1213 10:36:34.914264  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:34.914587  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:35.414286  353396 type.go:168] "Request Body" body=""
	I1213 10:36:35.414359  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:35.414670  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:35.414741  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:35.914405  353396 type.go:168] "Request Body" body=""
	I1213 10:36:35.914489  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:35.914832  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:36.415085  353396 type.go:168] "Request Body" body=""
	I1213 10:36:36.415160  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:36.415449  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:36.914164  353396 type.go:168] "Request Body" body=""
	I1213 10:36:36.914244  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:36.914585  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:37.414308  353396 type.go:168] "Request Body" body=""
	I1213 10:36:37.414384  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:37.414780  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:37.414840  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:37.914758  353396 type.go:168] "Request Body" body=""
	I1213 10:36:37.914831  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:37.915157  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:38.414970  353396 type.go:168] "Request Body" body=""
	I1213 10:36:38.415052  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:38.415405  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:38.915122  353396 type.go:168] "Request Body" body=""
	I1213 10:36:38.915210  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:38.915558  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:39.414163  353396 type.go:168] "Request Body" body=""
	I1213 10:36:39.414237  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:39.414542  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:39.914247  353396 type.go:168] "Request Body" body=""
	I1213 10:36:39.914324  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:39.914669  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:39.914747  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:40.414415  353396 type.go:168] "Request Body" body=""
	I1213 10:36:40.414494  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:40.414850  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:40.915098  353396 type.go:168] "Request Body" body=""
	I1213 10:36:40.915172  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:40.915425  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:41.414124  353396 type.go:168] "Request Body" body=""
	I1213 10:36:41.414207  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:41.414558  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:41.914180  353396 type.go:168] "Request Body" body=""
	I1213 10:36:41.914266  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:41.914604  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:36:42.415138  353396 type.go:168] "Request Body" body=""
	I1213 10:36:42.415216  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:42.415488  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:36:42.415535  353396 node_ready.go:55] error getting node "functional-652709" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-652709": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:36:42.914549  353396 type.go:168] "Request Body" body=""
	I1213 10:36:42.914622  353396 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-652709" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:36:42.914929  353396 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the identical GET poll against https://192.168.49.2:8441/api/v1/nodes/functional-652709 repeats every ~500ms, each attempt refused ("dial tcp 192.168.49.2:8441: connect: connection refused"), from 10:36:43 through 10:37:07.915 ...]
	I1213 10:37:08.414751  353396 node_ready.go:38] duration metric: took 6m0.000751586s for node "functional-652709" to be "Ready" ...
	I1213 10:37:08.417881  353396 out.go:203] 
	W1213 10:37:08.420786  353396 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 10:37:08.420808  353396 out.go:285] * 
	W1213 10:37:08.422957  353396 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:37:08.425703  353396 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 10:37:15 functional-652709 containerd[5259]: time="2025-12-13T10:37:15.942332098Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 10:37:16 functional-652709 containerd[5259]: time="2025-12-13T10:37:16.961845527Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 13 10:37:16 functional-652709 containerd[5259]: time="2025-12-13T10:37:16.964100465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 13 10:37:16 functional-652709 containerd[5259]: time="2025-12-13T10:37:16.974175449Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 10:37:16 functional-652709 containerd[5259]: time="2025-12-13T10:37:16.975067256Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 10:37:17 functional-652709 containerd[5259]: time="2025-12-13T10:37:17.942354469Z" level=info msg="No images store for sha256:c6249fed01776cbcb41b36b4a4c0ab7eea746dbacf3e857d9f5cb60a67157990"
	Dec 13 10:37:17 functional-652709 containerd[5259]: time="2025-12-13T10:37:17.944548787Z" level=info msg="ImageCreate event name:\"docker.io/library/minikube-local-cache-test:functional-652709\""
	Dec 13 10:37:17 functional-652709 containerd[5259]: time="2025-12-13T10:37:17.952195292Z" level=info msg="ImageCreate event name:\"sha256:3e30c52a5eb43a8e5ba840b7293fbdeceebf98349701321a36a877e21e3b575a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 10:37:17 functional-652709 containerd[5259]: time="2025-12-13T10:37:17.952772233Z" level=info msg="ImageUpdate event name:\"docker.io/library/minikube-local-cache-test:functional-652709\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 10:37:18 functional-652709 containerd[5259]: time="2025-12-13T10:37:18.780385969Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\""
	Dec 13 10:37:18 functional-652709 containerd[5259]: time="2025-12-13T10:37:18.782829258Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:latest\""
	Dec 13 10:37:18 functional-652709 containerd[5259]: time="2025-12-13T10:37:18.784790556Z" level=info msg="ImageDelete event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\""
	Dec 13 10:37:18 functional-652709 containerd[5259]: time="2025-12-13T10:37:18.796574348Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\" returns successfully"
	Dec 13 10:37:19 functional-652709 containerd[5259]: time="2025-12-13T10:37:19.745952345Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\""
	Dec 13 10:37:19 functional-652709 containerd[5259]: time="2025-12-13T10:37:19.748349159Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.1\""
	Dec 13 10:37:19 functional-652709 containerd[5259]: time="2025-12-13T10:37:19.750670888Z" level=info msg="ImageDelete event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\""
	Dec 13 10:37:19 functional-652709 containerd[5259]: time="2025-12-13T10:37:19.760596347Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\" returns successfully"
	Dec 13 10:37:19 functional-652709 containerd[5259]: time="2025-12-13T10:37:19.909948044Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 13 10:37:19 functional-652709 containerd[5259]: time="2025-12-13T10:37:19.912297375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 13 10:37:19 functional-652709 containerd[5259]: time="2025-12-13T10:37:19.919399883Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 10:37:19 functional-652709 containerd[5259]: time="2025-12-13T10:37:19.919749860Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 10:37:20 functional-652709 containerd[5259]: time="2025-12-13T10:37:20.069647350Z" level=info msg="No images store for sha256:3ac89611d5efd8eb74174b1f04c33b7e73b651cec35b5498caf0cfdd2efd7d48"
	Dec 13 10:37:20 functional-652709 containerd[5259]: time="2025-12-13T10:37:20.072047446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.1\""
	Dec 13 10:37:20 functional-652709 containerd[5259]: time="2025-12-13T10:37:20.079456107Z" level=info msg="ImageCreate event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 10:37:20 functional-652709 containerd[5259]: time="2025-12-13T10:37:20.079893502Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:24.131566    9387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:24.132364    9387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:24.134354    9387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:24.135057    9387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:24.136721    9387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +12.907693] overlayfs: idmapped layers are currently not supported
	[Dec13 09:25] overlayfs: idmapped layers are currently not supported
	[ +26.192425] overlayfs: idmapped layers are currently not supported
	[Dec13 09:26] overlayfs: idmapped layers are currently not supported
	[ +25.729788] overlayfs: idmapped layers are currently not supported
	[Dec13 09:27] overlayfs: idmapped layers are currently not supported
	[Dec13 09:28] overlayfs: idmapped layers are currently not supported
	[Dec13 09:31] overlayfs: idmapped layers are currently not supported
	[Dec13 09:32] overlayfs: idmapped layers are currently not supported
	[ +17.979093] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec13 09:43] overlayfs: idmapped layers are currently not supported
	[Dec13 09:45] overlayfs: idmapped layers are currently not supported
	[ +25.885102] overlayfs: idmapped layers are currently not supported
	[Dec13 09:46] overlayfs: idmapped layers are currently not supported
	[ +22.078149] overlayfs: idmapped layers are currently not supported
	[Dec13 09:47] overlayfs: idmapped layers are currently not supported
	[Dec13 09:48] overlayfs: idmapped layers are currently not supported
	[Dec13 09:49] overlayfs: idmapped layers are currently not supported
	[Dec13 09:51] overlayfs: idmapped layers are currently not supported
	[ +17.043564] overlayfs: idmapped layers are currently not supported
	[Dec13 09:52] overlayfs: idmapped layers are currently not supported
	[Dec13 09:53] overlayfs: idmapped layers are currently not supported
	[Dec13 10:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:19] hrtimer: interrupt took 21247146 ns
	
	
	==> kernel <==
	 10:37:24 up  3:19,  0 user,  load average: 1.00, 0.48, 0.82
	Linux functional-652709 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:37:20 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:37:21 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 827.
	Dec 13 10:37:21 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:37:21 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:37:21 functional-652709 kubelet[9166]: E1213 10:37:21.469044    9166 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:37:21 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:37:21 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:37:22 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 828.
	Dec 13 10:37:22 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:37:22 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:37:22 functional-652709 kubelet[9262]: E1213 10:37:22.226729    9262 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:37:22 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:37:22 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:37:22 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 829.
	Dec 13 10:37:22 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:37:22 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:37:22 functional-652709 kubelet[9283]: E1213 10:37:22.980801    9283 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:37:22 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:37:22 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:37:23 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 830.
	Dec 13 10:37:23 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:37:23 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:37:23 functional-652709 kubelet[9304]: E1213 10:37:23.736723    9304 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:37:23 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:37:23 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
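The kubelet journal above shows the actual failure: each restart (counters 827 through 830) exits while validating its configuration with "kubelet is configured to not run on a host using cgroup v1". A minimal sketch of confirming the node's cgroup mode and applying the opt-out that message names, assuming the YAML spelling of the 'FailCgroupV1' option is the camel-cased field failCgroupV1 (an assumption; check the KubeletConfiguration reference for v1.35) and using the config path the kubeadm output below writes (/var/lib/kubelet/config.yaml):

    # cgroup2fs means cgroup v2 (unified); tmpfs means the cgroup v1 case failing here
    minikube -p functional-652709 ssh -- stat -fc %T /sys/fs/cgroup/
    # Append the opt-out inside the node, then restart the kubelet
    # (failCgroupV1 is the assumed field name; see the lead-in above)
    minikube -p functional-652709 ssh -- \
      "echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml \
       && sudo systemctl restart kubelet"

Note that kubeadm rewrites /var/lib/kubelet/config.yaml on every init, so this edit would not survive another `minikube start`; it only isolates whether the cgroup v1 validation is the sole blocker.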
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-652709 -n functional-652709
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-652709 -n functional-652709: exit status 2 (351.08559ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-652709" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.31s)
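The "Stopped" apiserver state above is the downstream symptom of the kubelet crash loop in the logs, which is why each kubectl-dependent subtest fails within a few seconds. A sketch of reproducing the harness probe by hand, assuming the binary the harness builds (out/minikube-linux-arm64) is invoked from the repo root as the report does:

    # The same probe helpers_test.go runs, then the full component view
    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p functional-652709
    out/minikube-linux-arm64 status -p functional-652709
    # Tail the kubelet journal excerpted in the logs above
    out/minikube-linux-arm64 -p functional-652709 ssh -- sudo journalctl -u kubelet --no-pager -n 20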

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (736.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-652709 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1213 10:40:12.241964  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:41:48.079910  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:43:11.156204  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:45:12.240783  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:46:48.084692  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-652709 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m14.026083798s)

-- stdout --
	* [functional-652709] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-652709" primary control-plane node in "functional-652709" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00099636s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001187482s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001187482s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
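The stderr block ends with minikube's own remediation hints. A sketch of acting on them, with the caveat that the cgroup-driver suggestion and issue 4172 predate the cgroup v1 removal cited by the preflight warning (KEP 5573), so this retry is the tool's generic advice, not a fix the log confirms:

    # The two troubleshooting commands the kubeadm output recommends
    out/minikube-linux-arm64 -p functional-652709 ssh -- systemctl status kubelet
    out/minikube-linux-arm64 -p functional-652709 ssh -- journalctl -xeu kubelet
    # The retry minikube suggests in the suggestion line above
    out/minikube-linux-arm64 start -p functional-652709 \
      --extra-config=kubelet.cgroup-driver=systemd --wait=all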
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-652709 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m14.027313637s for "functional-652709" cluster.
I1213 10:49:39.122114  308915 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-652709
helpers_test.go:244: (dbg) docker inspect functional-652709:

-- stdout --
	[
	    {
	        "Id": "0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f",
	        "Created": "2025-12-13T10:22:44.366993781Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 347931,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:22:44.437030763Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/hostname",
	        "HostsPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/hosts",
	        "LogPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f-json.log",
	        "Name": "/functional-652709",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-652709:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-652709",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f",
	                "LowerDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095-init/diff:/var/lib/docker/overlay2/5d0ca86320f2c06765995d644a73af30cd6ddb36f5c1a2a6b1ebb24af53cdabd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/merged",
	                "UpperDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/diff",
	                "WorkDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-652709",
	                "Source": "/var/lib/docker/volumes/functional-652709/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-652709",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-652709",
	                "name.minikube.sigs.k8s.io": "functional-652709",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "52e527b5bd789a02eb7efb651200033ed4929e5fc7545e9df042d3f777cc9782",
	            "SandboxKey": "/var/run/docker/netns/52e527b5bd78",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-652709": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:23:08:9e:cb:13",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "344f2b940117dadb28d1ef1328f911c0446307288fdfafebfe59f38e473f79cb",
	                    "EndpointID": "8954f96e5987202be5715e7023384fe862744778b2520bccba28c57814f0980f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-652709",
	                        "0f6101071ca2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
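
For orientation, "NetworkSettings.Ports" in the inspect dump above maps each published container port to a loopback host port; 8441/tcp (the apiserver port) lands on 127.0.0.1:33128. As a sketch, assuming the container is still running, the same Go-template query minikube itself uses later in this log reads a single mapping back out:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-652709
    # prints 33128, per the dump above
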
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-652709 -n functional-652709
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-652709 -n functional-652709: exit status 2 (321.735428ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-319494 image ls --format yaml --alsologtostderr                                                                                              │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ ssh     │ functional-319494 ssh pgrep buildkitd                                                                                                                   │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │                     │
	│ image   │ functional-319494 image ls --format json --alsologtostderr                                                                                              │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ image   │ functional-319494 image build -t localhost/my-image:functional-319494 testdata/build --alsologtostderr                                                  │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ image   │ functional-319494 image ls --format table --alsologtostderr                                                                                             │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ image   │ functional-319494 image ls                                                                                                                              │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ delete  │ -p functional-319494                                                                                                                                    │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ start   │ -p functional-652709 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │                     │
	│ start   │ -p functional-652709 --alsologtostderr -v=8                                                                                                             │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:31 UTC │                     │
	│ cache   │ functional-652709 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ cache   │ functional-652709 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ cache   │ functional-652709 cache add registry.k8s.io/pause:latest                                                                                                │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ cache   │ functional-652709 cache add minikube-local-cache-test:functional-652709                                                                                 │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ cache   │ functional-652709 cache delete minikube-local-cache-test:functional-652709                                                                              │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ ssh     │ functional-652709 ssh sudo crictl images                                                                                                                │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ ssh     │ functional-652709 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ ssh     │ functional-652709 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │                     │
	│ cache   │ functional-652709 cache reload                                                                                                                          │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ ssh     │ functional-652709 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ kubectl │ functional-652709 kubectl -- --context functional-652709 get pods                                                                                       │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │                     │
	│ start   │ -p functional-652709 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:37:25
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:37:25.138350  359214 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:37:25.138465  359214 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:37:25.138469  359214 out.go:374] Setting ErrFile to fd 2...
	I1213 10:37:25.138473  359214 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:37:25.138742  359214 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 10:37:25.139091  359214 out.go:368] Setting JSON to false
	I1213 10:37:25.139911  359214 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":11998,"bootTime":1765610247,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 10:37:25.139964  359214 start.go:143] virtualization:  
	I1213 10:37:25.143535  359214 out.go:179] * [functional-652709] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:37:25.146407  359214 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 10:37:25.146500  359214 notify.go:221] Checking for updates...
	I1213 10:37:25.152371  359214 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:37:25.155287  359214 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:37:25.158064  359214 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 10:37:25.162885  359214 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:37:25.165865  359214 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:37:25.169282  359214 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:37:25.169378  359214 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:37:25.203946  359214 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:37:25.204073  359214 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:37:25.282140  359214 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 10:37:25.272517516 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:37:25.282233  359214 docker.go:319] overlay module found
	I1213 10:37:25.285314  359214 out.go:179] * Using the docker driver based on existing profile
	I1213 10:37:25.288091  359214 start.go:309] selected driver: docker
	I1213 10:37:25.288098  359214 start.go:927] validating driver "docker" against &{Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:37:25.288215  359214 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:37:25.288310  359214 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:37:25.346233  359214 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 10:37:25.336833323 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:37:25.346649  359214 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:37:25.346672  359214 cni.go:84] Creating CNI manager for ""
	I1213 10:37:25.346746  359214 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:37:25.346788  359214 start.go:353] cluster config:
	{Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:37:25.351648  359214 out.go:179] * Starting "functional-652709" primary control-plane node in "functional-652709" cluster
	I1213 10:37:25.354472  359214 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 10:37:25.357365  359214 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:37:25.360240  359214 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 10:37:25.360279  359214 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 10:37:25.360290  359214 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:37:25.360305  359214 cache.go:65] Caching tarball of preloaded images
	I1213 10:37:25.360390  359214 preload.go:238] Found /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 10:37:25.360398  359214 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 10:37:25.360508  359214 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/config.json ...
	I1213 10:37:25.379669  359214 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:37:25.379680  359214 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:37:25.379701  359214 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:37:25.379731  359214 start.go:360] acquireMachinesLock for functional-652709: {Name:mk6e8c40fbbb5af0bb2468340fd710875030300d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:37:25.379795  359214 start.go:364] duration metric: took 46.958µs to acquireMachinesLock for "functional-652709"
	I1213 10:37:25.379812  359214 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:37:25.379817  359214 fix.go:54] fixHost starting: 
	I1213 10:37:25.380078  359214 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
	I1213 10:37:25.396614  359214 fix.go:112] recreateIfNeeded on functional-652709: state=Running err=<nil>
	W1213 10:37:25.396632  359214 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 10:37:25.399750  359214 out.go:252] * Updating the running docker "functional-652709" container ...
	I1213 10:37:25.399771  359214 machine.go:94] provisionDockerMachine start ...
	I1213 10:37:25.399844  359214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:37:25.416990  359214 main.go:143] libmachine: Using SSH client type: native
	I1213 10:37:25.417324  359214 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1213 10:37:25.417330  359214 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:37:25.566232  359214 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-652709
	
	I1213 10:37:25.566247  359214 ubuntu.go:182] provisioning hostname "functional-652709"
	I1213 10:37:25.566312  359214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:37:25.583930  359214 main.go:143] libmachine: Using SSH client type: native
	I1213 10:37:25.584239  359214 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1213 10:37:25.584247  359214 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-652709 && echo "functional-652709" | sudo tee /etc/hostname
	I1213 10:37:25.743712  359214 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-652709
	
	I1213 10:37:25.743781  359214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:37:25.761387  359214 main.go:143] libmachine: Using SSH client type: native
	I1213 10:37:25.761683  359214 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1213 10:37:25.761697  359214 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-652709' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-652709/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-652709' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:37:25.915528  359214 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:37:25.915543  359214 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-307042/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-307042/.minikube}
	I1213 10:37:25.915567  359214 ubuntu.go:190] setting up certificates
	I1213 10:37:25.915589  359214 provision.go:84] configureAuth start
	I1213 10:37:25.915650  359214 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-652709
	I1213 10:37:25.937241  359214 provision.go:143] copyHostCerts
	I1213 10:37:25.937315  359214 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem, removing ...
	I1213 10:37:25.937323  359214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem
	I1213 10:37:25.937397  359214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem (1082 bytes)
	I1213 10:37:25.937493  359214 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem, removing ...
	I1213 10:37:25.937497  359214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem
	I1213 10:37:25.937521  359214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem (1123 bytes)
	I1213 10:37:25.937570  359214 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem, removing ...
	I1213 10:37:25.937573  359214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem
	I1213 10:37:25.937593  359214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem (1675 bytes)
	I1213 10:37:25.937635  359214 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem org=jenkins.functional-652709 san=[127.0.0.1 192.168.49.2 functional-652709 localhost minikube]
	I1213 10:37:26.244127  359214 provision.go:177] copyRemoteCerts
	I1213 10:37:26.244186  359214 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:37:26.244225  359214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:37:26.264658  359214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:37:26.370401  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:37:26.387044  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:37:26.404259  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:37:26.421389  359214 provision.go:87] duration metric: took 505.777833ms to configureAuth
	I1213 10:37:26.421407  359214 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:37:26.421614  359214 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:37:26.421620  359214 machine.go:97] duration metric: took 1.021844371s to provisionDockerMachine
	I1213 10:37:26.421627  359214 start.go:293] postStartSetup for "functional-652709" (driver="docker")
	I1213 10:37:26.421636  359214 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:37:26.421692  359214 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:37:26.421728  359214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:37:26.439115  359214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:37:26.542461  359214 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:37:26.545680  359214 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:37:26.545698  359214 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:37:26.545710  359214 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/addons for local assets ...
	I1213 10:37:26.545763  359214 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/files for local assets ...
	I1213 10:37:26.545836  359214 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> 3089152.pem in /etc/ssl/certs
	I1213 10:37:26.545911  359214 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/test/nested/copy/308915/hosts -> hosts in /etc/test/nested/copy/308915
	I1213 10:37:26.545959  359214 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/308915
	I1213 10:37:26.553760  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 10:37:26.571190  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/test/nested/copy/308915/hosts --> /etc/test/nested/copy/308915/hosts (40 bytes)
	I1213 10:37:26.588882  359214 start.go:296] duration metric: took 167.239997ms for postStartSetup
	I1213 10:37:26.588951  359214 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:37:26.588988  359214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:37:26.606145  359214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:37:26.708907  359214 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:37:26.713681  359214 fix.go:56] duration metric: took 1.333856829s for fixHost
	I1213 10:37:26.713698  359214 start.go:83] releasing machines lock for "functional-652709", held for 1.333895015s
	I1213 10:37:26.713781  359214 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-652709
	I1213 10:37:26.733362  359214 ssh_runner.go:195] Run: cat /version.json
	I1213 10:37:26.733405  359214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:37:26.733670  359214 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:37:26.733727  359214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:37:26.755898  359214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:37:26.764378  359214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:37:26.858420  359214 ssh_runner.go:195] Run: systemctl --version
	I1213 10:37:26.952524  359214 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:37:26.956969  359214 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:37:26.957030  359214 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:37:26.964724  359214 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 10:37:26.964738  359214 start.go:496] detecting cgroup driver to use...
	I1213 10:37:26.964768  359214 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:37:26.964823  359214 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 10:37:26.980031  359214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 10:37:26.993058  359214 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:37:26.993140  359214 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:37:27.016019  359214 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:37:27.029352  359214 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:37:27.143876  359214 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:37:27.259911  359214 docker.go:234] disabling docker service ...
	I1213 10:37:27.259973  359214 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:37:27.275304  359214 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:37:27.288715  359214 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:37:27.403391  359214 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:37:27.538286  359214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:37:27.551384  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:37:27.565344  359214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 10:37:27.574020  359214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 10:37:27.583189  359214 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 10:37:27.583255  359214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 10:37:27.591895  359214 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:37:27.600966  359214 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 10:37:27.609996  359214 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:37:27.618821  359214 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:37:27.626864  359214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 10:37:27.635612  359214 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 10:37:27.644477  359214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 10:37:27.653477  359214 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:37:27.661005  359214 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:37:27.668365  359214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:37:27.776281  359214 ssh_runner.go:195] Run: sudo systemctl restart containerd
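	# The sed/systemctl sequence above rewrites /etc/containerd/config.toml for the
	# "cgroupfs" driver (SystemdCgroup = false, pause image 3.10.1, CNI conf_dir)
	# and restarts containerd. A spot-check sketch, assuming the profile is still
	# reachable over SSH (the grep target comes from the sed expression above;
	# output not captured here):
	#   out/minikube-linux-arm64 -p functional-652709 ssh -- sudo grep SystemdCgroup /etc/containerd/config.toml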
	I1213 10:37:27.924718  359214 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 10:37:27.924777  359214 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 10:37:27.928729  359214 start.go:564] Will wait 60s for crictl version
	I1213 10:37:27.928789  359214 ssh_runner.go:195] Run: which crictl
	I1213 10:37:27.932637  359214 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:37:27.956729  359214 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 10:37:27.956786  359214 ssh_runner.go:195] Run: containerd --version
	I1213 10:37:27.979747  359214 ssh_runner.go:195] Run: containerd --version
	I1213 10:37:28.007018  359214 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 10:37:28.009973  359214 cli_runner.go:164] Run: docker network inspect functional-652709 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:37:28.026979  359214 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 10:37:28.034215  359214 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1213 10:37:28.037114  359214 kubeadm.go:884] updating cluster {Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:37:28.037277  359214 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 10:37:28.037366  359214 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:37:28.069735  359214 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 10:37:28.069748  359214 containerd.go:534] Images already preloaded, skipping extraction
	I1213 10:37:28.069804  359214 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:37:28.094782  359214 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 10:37:28.094795  359214 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:37:28.094801  359214 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1213 10:37:28.094901  359214 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-652709 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 10:37:28.094963  359214 ssh_runner.go:195] Run: sudo crictl info
	I1213 10:37:28.123071  359214 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1213 10:37:28.123096  359214 cni.go:84] Creating CNI manager for ""
	I1213 10:37:28.123104  359214 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:37:28.123112  359214 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:37:28.123134  359214 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-652709 NodeName:functional-652709 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false Kubel
etConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:37:28.123244  359214 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-652709"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 10:37:28.123313  359214 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:37:28.131175  359214 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:37:28.131238  359214 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:37:28.138792  359214 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 10:37:28.151537  359214 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:37:28.169495  359214 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
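
The 2087-byte file written here is the rendered manifest shown above. Before adopting a file like this, it can be sanity-checked with kubeadm itself; a sketch, assuming the bundled kubeadm is recent enough to ship the `kubeadm config validate` subcommand (1.26+):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Validate the rendered manifest against kubeadm's v1beta4 schema.
        cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm",
            "config", "validate", "--config", "/var/tmp/minikube/kubeadm.yaml.new")
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Printf("invalid kubeadm config: %v\n%s", err, out)
            return
        }
        fmt.Println("kubeadm config OK")
    }
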
	I1213 10:37:28.184364  359214 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:37:28.188525  359214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:37:28.305096  359214 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:37:28.912534  359214 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709 for IP: 192.168.49.2
	I1213 10:37:28.912575  359214 certs.go:195] generating shared ca certs ...
	I1213 10:37:28.912591  359214 certs.go:227] acquiring lock for ca certs: {Name:mkca84a85b1b664dba08ef567e84d493256f825e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:37:28.912719  359214 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key
	I1213 10:37:28.912771  359214 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key
	I1213 10:37:28.912778  359214 certs.go:257] generating profile certs ...
	I1213 10:37:28.912857  359214 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.key
	I1213 10:37:28.912917  359214 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.key.86e7afd1
	I1213 10:37:28.912954  359214 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.key
	I1213 10:37:28.913063  359214 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem (1338 bytes)
	W1213 10:37:28.913092  359214 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915_empty.pem, impossibly tiny 0 bytes
	I1213 10:37:28.913099  359214 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:37:28.913124  359214 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:37:28.913151  359214 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:37:28.913174  359214 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem (1675 bytes)
	I1213 10:37:28.913221  359214 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 10:37:28.913808  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:37:28.931820  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:37:28.949028  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:37:28.966476  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:37:28.984047  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:37:29.002075  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 10:37:29.020305  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:37:29.037811  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:37:29.054630  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:37:29.071547  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem --> /usr/share/ca-certificates/308915.pem (1338 bytes)
	I1213 10:37:29.088633  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /usr/share/ca-certificates/3089152.pem (1708 bytes)
	I1213 10:37:29.105638  359214 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:37:29.118149  359214 ssh_runner.go:195] Run: openssl version
	I1213 10:37:29.124118  359214 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:37:29.131416  359214 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:37:29.138705  359214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:37:29.142329  359214 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:37:29.142388  359214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:37:29.183023  359214 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:37:29.190485  359214 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/308915.pem
	I1213 10:37:29.197738  359214 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/308915.pem /etc/ssl/certs/308915.pem
	I1213 10:37:29.205192  359214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/308915.pem
	I1213 10:37:29.209070  359214 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:22 /usr/share/ca-certificates/308915.pem
	I1213 10:37:29.209124  359214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308915.pem
	I1213 10:37:29.250234  359214 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:37:29.257744  359214 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3089152.pem
	I1213 10:37:29.265022  359214 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3089152.pem /etc/ssl/certs/3089152.pem
	I1213 10:37:29.272593  359214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3089152.pem
	I1213 10:37:29.276820  359214 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:22 /usr/share/ca-certificates/3089152.pem
	I1213 10:37:29.276874  359214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3089152.pem
	I1213 10:37:29.317834  359214 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
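
The repeated pattern above (ln -fs into /etc/ssl/certs, openssl x509 -hash, then test -L on <hash>.0) installs each CA bundle under the subject-hash filename that OpenSSL uses for lookups. A sketch of that step; note the log only shows the `test -L` probe, so the assumption that the hash link is created when missing is mine:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // installCA mirrors the ln/openssl/test sequence in the log: link the
    // bundle into /etc/ssl/certs by name, then ensure a <subject-hash>.0
    // link exists as well.
    func installCA(certPath, name string) error {
        if err := exec.Command("sudo", "ln", "-fs", certPath, "/etc/ssl/certs/"+name).Run(); err != nil {
            return err
        }
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        byHash := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        if exec.Command("sudo", "test", "-L", byHash).Run() != nil {
            // Assumption: create the hash link if the probe fails.
            return exec.Command("sudo", "ln", "-fs", certPath, byHash).Run()
        }
        return nil
    }

    func main() {
        fmt.Println(installCA("/usr/share/ca-certificates/minikubeCA.pem", "minikubeCA.pem"))
    }
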
	I1213 10:37:29.325126  359214 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:37:29.328844  359214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:37:29.369639  359214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:37:29.410192  359214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:37:29.467336  359214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:37:29.508158  359214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:37:29.549013  359214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
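
The six openssl runs above all use -checkend 86400, which exits non-zero when a certificate expires within the next 24 hours; any failure here would force the control-plane certs to be regenerated. A sketch of the same probe:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // validFor24h reports whether the certificate at path remains valid for
    // at least 86400 seconds; openssl exits non-zero when it expires sooner.
    func validFor24h(path string) bool {
        return exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").Run() == nil
    }

    func main() {
        for _, crt := range []string{
            "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
            "/var/lib/minikube/certs/etcd/server.crt",
            "/var/lib/minikube/certs/front-proxy-client.crt",
        } {
            fmt.Println(crt, validFor24h(crt))
        }
    }
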
	I1213 10:37:29.589618  359214 kubeadm.go:401] StartCluster: {Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:37:29.589715  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 10:37:29.589775  359214 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:37:29.617382  359214 cri.go:89] found id: ""
	I1213 10:37:29.617441  359214 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:37:29.625150  359214 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:37:29.625165  359214 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:37:29.625217  359214 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:37:29.632536  359214 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:37:29.633037  359214 kubeconfig.go:125] found "functional-652709" server: "https://192.168.49.2:8441"
	I1213 10:37:29.635539  359214 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:37:29.643331  359214 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-13 10:22:52.033435592 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-13 10:37:28.181843120 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
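
The drift decision above comes straight from diff's exit status: `diff -u` on the live kubeadm.yaml against the freshly rendered .new file returns 0 for identical files and 1 for drift, and only the latter takes the reconfigure path. A sketch of that check:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // configDrifted interprets diff's exit status: 0 = identical,
    // 1 = drift (reconfigure), anything else is a real failure.
    func configDrifted(oldPath, newPath string) (bool, string, error) {
        out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
        if err == nil {
            return false, "", nil
        }
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 1 {
            return true, string(out), nil
        }
        return false, "", err
    }

    func main() {
        drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        fmt.Println("drifted:", drifted, "err:", err)
        fmt.Print(diff)
    }
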
	I1213 10:37:29.643344  359214 kubeadm.go:1161] stopping kube-system containers ...
	I1213 10:37:29.643355  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1213 10:37:29.643418  359214 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:37:29.681117  359214 cri.go:89] found id: ""
	I1213 10:37:29.681185  359214 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 10:37:29.700348  359214 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:37:29.708464  359214 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 13 10:26 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 13 10:27 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 13 10:26 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 13 10:27 /etc/kubernetes/scheduler.conf
	
	I1213 10:37:29.708519  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:37:29.716973  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:37:29.724972  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:37:29.725027  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:37:29.732670  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:37:29.740374  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:37:29.740426  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:37:29.747796  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:37:29.755836  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:37:29.755895  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:37:29.763121  359214 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:37:29.770676  359214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:37:29.815944  359214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:37:31.022963  359214 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.206994632s)
	I1213 10:37:31.023029  359214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:37:31.239388  359214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:37:31.313712  359214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
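
On this restart path minikube replays individual kubeadm init phases against the rendered config rather than running a full `kubeadm init`. A sketch of the same sequence (certs, kubeconfig, kubelet-start, control-plane, etcd), with paths copied from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        const kubeadm = "/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm"
        const cfg = "/var/tmp/minikube/kubeadm.yaml"
        // Same phase order as the log: certs, kubeconfigs, kubelet,
        // static control-plane manifests, then local etcd.
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, phase := range phases {
            args := append([]string{kubeadm}, append(phase, "--config", cfg)...)
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                fmt.Printf("kubeadm %v failed: %v\n%s", phase, err, out)
                return
            }
        }
        fmt.Println("all phases completed")
    }
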
	I1213 10:37:31.358670  359214 api_server.go:52] waiting for apiserver process to appear ...
	I1213 10:37:31.358755  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:31.859658  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:32.358989  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:32.859540  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:33.359279  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:33.859755  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:34.358874  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:34.859660  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:35.358974  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:35.859781  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:36.359545  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:36.858931  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:37.359594  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:37.858997  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:38.359204  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:38.858979  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:39.358917  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:39.859473  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:40.359538  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:40.859107  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:41.358909  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:41.859704  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:42.359845  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:42.858940  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:43.359903  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:43.859817  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:44.359835  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:44.859527  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:45.359678  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:45.859496  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:46.359291  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:46.858996  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:47.358908  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:47.859899  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:48.358923  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:48.859520  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:49.358971  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:49.859614  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:50.359594  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:50.859684  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:51.359555  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:51.859532  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:52.359643  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:52.858959  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:53.359880  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:53.859709  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:54.359771  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:54.859730  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:55.359785  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:55.858870  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:56.359649  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:56.858975  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:57.358923  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:57.858974  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:58.359777  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:58.859581  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:59.359156  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:59.858896  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:00.358974  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:00.859820  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:01.359786  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:01.858901  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:02.359740  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:02.858926  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:03.359018  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:03.859003  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:04.358882  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:04.859861  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:05.358860  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:05.859819  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:06.358836  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:06.859844  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:07.359700  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:07.859637  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:08.358985  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:08.859911  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:09.358995  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:09.859620  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:10.359502  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:10.859134  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:11.358958  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:11.859244  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:12.359094  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:12.858981  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:13.359211  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:13.859751  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:14.358846  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:14.859594  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:15.358998  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:15.859726  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:16.358944  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:16.859375  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:17.358986  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:17.859765  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:18.358918  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:18.859799  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:19.359117  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:19.859388  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:20.359631  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:20.858965  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:21.358912  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:21.858871  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:22.359799  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:22.859665  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:23.359516  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:23.859788  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:24.359018  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:24.858866  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:25.359003  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:25.859726  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:26.358952  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:26.859653  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:27.359769  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:27.859360  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:28.358958  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:28.859685  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:29.359809  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:29.859773  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:30.359871  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:30.859558  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
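
The long run of pgrep lines above is a fixed-interval wait: the apiserver process is polled roughly every 500ms until a deadline, and here it never appears, so the loop periodically breaks to gather diagnostic logs, as it does next. A sketch of that polling pattern:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls for the apiserver process with the same pgrep
    // pattern as the log, every 500ms, until it appears or the deadline passes.
    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil // process found
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
        fmt.Println(waitForAPIServer(time.Minute))
    }
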
	I1213 10:38:31.359176  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:31.359252  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:31.383827  359214 cri.go:89] found id: ""
	I1213 10:38:31.383841  359214 logs.go:282] 0 containers: []
	W1213 10:38:31.383849  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:31.383855  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:31.383917  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:31.412267  359214 cri.go:89] found id: ""
	I1213 10:38:31.412291  359214 logs.go:282] 0 containers: []
	W1213 10:38:31.412300  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:31.412305  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:31.412364  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:31.437736  359214 cri.go:89] found id: ""
	I1213 10:38:31.437751  359214 logs.go:282] 0 containers: []
	W1213 10:38:31.437758  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:31.437763  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:31.437824  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:31.461791  359214 cri.go:89] found id: ""
	I1213 10:38:31.461806  359214 logs.go:282] 0 containers: []
	W1213 10:38:31.461813  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:31.461818  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:31.461880  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:31.488695  359214 cri.go:89] found id: ""
	I1213 10:38:31.488709  359214 logs.go:282] 0 containers: []
	W1213 10:38:31.488717  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:31.488722  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:31.488789  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:31.517230  359214 cri.go:89] found id: ""
	I1213 10:38:31.517245  359214 logs.go:282] 0 containers: []
	W1213 10:38:31.517274  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:31.517281  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:31.517340  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:31.541920  359214 cri.go:89] found id: ""
	I1213 10:38:31.541934  359214 logs.go:282] 0 containers: []
	W1213 10:38:31.541942  359214 logs.go:284] No container was found matching "kindnet"
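
The sweep above probes each control-plane component with `crictl ps -a --quiet --name=<component>`; an empty ID list for all seven confirms that no kube-system containers were ever created after the restart. A sketch of the same sweep:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // One probe per component checked in the log.
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, name := range components {
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            ids := strings.Fields(string(out))
            fmt.Printf("%-24s %d container(s), err=%v\n", name, len(ids), err)
        }
    }
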
	I1213 10:38:31.541951  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:31.541962  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:31.558143  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:31.558161  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:31.623427  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:31.614536   10797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.615101   10797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.616803   10797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.617190   10797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.619517   10797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:31.614536   10797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.615101   10797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.616803   10797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.617190   10797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.619517   10797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:31.623438  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:31.623449  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:31.686774  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:31.686794  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:31.719218  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:31.719234  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
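
When the wait fails, diagnostics are snapshotted from four sources: the kubelet and containerd journals, filtered dmesg, and container status (plus a `kubectl describe nodes` attempt, which fails here because the apiserver is down). A sketch that collects the same outputs, with commands copied verbatim from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Commands copied from the "Gathering logs for ..." lines above.
        sources := []struct{ name, cmd string }{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"containerd", "sudo journalctl -u containerd -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
        }
        for _, s := range sources {
            out, _ := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
            fmt.Printf("==> %s <==\n%s\n", s.name, out)
        }
    }
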
	I1213 10:38:34.280556  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:34.293171  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:34.293241  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:34.319161  359214 cri.go:89] found id: ""
	I1213 10:38:34.319176  359214 logs.go:282] 0 containers: []
	W1213 10:38:34.319183  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:34.319189  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:34.319245  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:34.348792  359214 cri.go:89] found id: ""
	I1213 10:38:34.348806  359214 logs.go:282] 0 containers: []
	W1213 10:38:34.348814  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:34.348819  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:34.348879  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:34.374794  359214 cri.go:89] found id: ""
	I1213 10:38:34.374809  359214 logs.go:282] 0 containers: []
	W1213 10:38:34.374816  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:34.374822  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:34.374883  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:34.399481  359214 cri.go:89] found id: ""
	I1213 10:38:34.399496  359214 logs.go:282] 0 containers: []
	W1213 10:38:34.399503  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:34.399509  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:34.399567  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:34.424169  359214 cri.go:89] found id: ""
	I1213 10:38:34.424184  359214 logs.go:282] 0 containers: []
	W1213 10:38:34.424191  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:34.424196  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:34.424300  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:34.449747  359214 cri.go:89] found id: ""
	I1213 10:38:34.449762  359214 logs.go:282] 0 containers: []
	W1213 10:38:34.449769  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:34.449775  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:34.449839  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:34.475244  359214 cri.go:89] found id: ""
	I1213 10:38:34.475259  359214 logs.go:282] 0 containers: []
	W1213 10:38:34.475266  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:34.475274  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:34.475284  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:34.531644  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:34.531665  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:34.548876  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:34.548895  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:34.612831  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:34.605081   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.605477   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.607131   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.607458   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.609038   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:34.605081   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.605477   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.607131   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.607458   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.609038   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:34.612842  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:34.612853  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:34.677588  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:34.677607  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:37.204561  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:37.215900  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:37.215960  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:37.240644  359214 cri.go:89] found id: ""
	I1213 10:38:37.240679  359214 logs.go:282] 0 containers: []
	W1213 10:38:37.240697  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:37.240710  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:37.240796  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:37.265154  359214 cri.go:89] found id: ""
	I1213 10:38:37.265168  359214 logs.go:282] 0 containers: []
	W1213 10:38:37.265176  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:37.265181  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:37.265240  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:37.290309  359214 cri.go:89] found id: ""
	I1213 10:38:37.290323  359214 logs.go:282] 0 containers: []
	W1213 10:38:37.290331  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:37.290336  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:37.290402  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:37.314207  359214 cri.go:89] found id: ""
	I1213 10:38:37.314222  359214 logs.go:282] 0 containers: []
	W1213 10:38:37.314229  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:37.314235  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:37.314294  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:37.338622  359214 cri.go:89] found id: ""
	I1213 10:38:37.338637  359214 logs.go:282] 0 containers: []
	W1213 10:38:37.338645  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:37.338651  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:37.338731  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:37.362866  359214 cri.go:89] found id: ""
	I1213 10:38:37.362881  359214 logs.go:282] 0 containers: []
	W1213 10:38:37.362888  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:37.362894  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:37.362954  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:37.388313  359214 cri.go:89] found id: ""
	I1213 10:38:37.388327  359214 logs.go:282] 0 containers: []
	W1213 10:38:37.388335  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:37.388343  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:37.388355  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:37.405018  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:37.405035  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:37.467928  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:37.459672   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.460192   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.461721   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.462120   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.463584   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:37.459672   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.460192   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.461721   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.462120   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.463584   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:37.467941  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:37.467952  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:37.536764  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:37.536793  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:37.565751  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:37.565767  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:40.124516  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:40.136075  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:40.136155  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:40.180740  359214 cri.go:89] found id: ""
	I1213 10:38:40.180755  359214 logs.go:282] 0 containers: []
	W1213 10:38:40.180763  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:40.180771  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:40.180844  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:40.214880  359214 cri.go:89] found id: ""
	I1213 10:38:40.214894  359214 logs.go:282] 0 containers: []
	W1213 10:38:40.214912  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:40.214918  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:40.214986  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:40.255502  359214 cri.go:89] found id: ""
	I1213 10:38:40.255516  359214 logs.go:282] 0 containers: []
	W1213 10:38:40.255524  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:40.255529  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:40.255590  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:40.279736  359214 cri.go:89] found id: ""
	I1213 10:38:40.279750  359214 logs.go:282] 0 containers: []
	W1213 10:38:40.279761  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:40.279766  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:40.279827  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:40.305162  359214 cri.go:89] found id: ""
	I1213 10:38:40.305186  359214 logs.go:282] 0 containers: []
	W1213 10:38:40.305194  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:40.305199  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:40.305268  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:40.330075  359214 cri.go:89] found id: ""
	I1213 10:38:40.330089  359214 logs.go:282] 0 containers: []
	W1213 10:38:40.330097  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:40.330103  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:40.330171  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:40.356608  359214 cri.go:89] found id: ""
	I1213 10:38:40.356623  359214 logs.go:282] 0 containers: []
	W1213 10:38:40.356631  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:40.356639  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:40.356649  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:40.386833  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:40.386850  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:40.442503  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:40.442523  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:40.458859  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:40.458875  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:40.526393  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:40.517849   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.518498   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.520192   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.520775   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.522583   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:40.517849   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.518498   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.520192   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.520775   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.522583   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:40.526415  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:40.526425  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:43.093725  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:43.104280  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:43.104351  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:43.128552  359214 cri.go:89] found id: ""
	I1213 10:38:43.128566  359214 logs.go:282] 0 containers: []
	W1213 10:38:43.128574  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:43.128579  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:43.128637  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:43.153838  359214 cri.go:89] found id: ""
	I1213 10:38:43.153853  359214 logs.go:282] 0 containers: []
	W1213 10:38:43.153861  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:43.153866  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:43.153925  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:43.182604  359214 cri.go:89] found id: ""
	I1213 10:38:43.182617  359214 logs.go:282] 0 containers: []
	W1213 10:38:43.182624  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:43.182631  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:43.182751  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:43.212454  359214 cri.go:89] found id: ""
	I1213 10:38:43.212481  359214 logs.go:282] 0 containers: []
	W1213 10:38:43.212489  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:43.212501  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:43.212572  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:43.239973  359214 cri.go:89] found id: ""
	I1213 10:38:43.239987  359214 logs.go:282] 0 containers: []
	W1213 10:38:43.240005  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:43.240011  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:43.240074  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:43.264733  359214 cri.go:89] found id: ""
	I1213 10:38:43.264748  359214 logs.go:282] 0 containers: []
	W1213 10:38:43.264755  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:43.264767  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:43.264826  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:43.291333  359214 cri.go:89] found id: ""
	I1213 10:38:43.291347  359214 logs.go:282] 0 containers: []
	W1213 10:38:43.291354  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:43.291362  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:43.291372  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:43.348037  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:43.348057  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:43.364359  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:43.364377  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:43.426788  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:43.418519   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.419245   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.420917   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.421479   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.423061   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:43.418519   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.419245   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.420917   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.421479   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.423061   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:43.426809  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:43.426819  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:43.492237  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:43.492258  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:46.019179  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:46.029376  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:46.029454  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:46.053215  359214 cri.go:89] found id: ""
	I1213 10:38:46.053229  359214 logs.go:282] 0 containers: []
	W1213 10:38:46.053236  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:46.053242  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:46.053315  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:46.078867  359214 cri.go:89] found id: ""
	I1213 10:38:46.078882  359214 logs.go:282] 0 containers: []
	W1213 10:38:46.078889  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:46.078895  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:46.078955  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:46.104476  359214 cri.go:89] found id: ""
	I1213 10:38:46.104490  359214 logs.go:282] 0 containers: []
	W1213 10:38:46.104498  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:46.104503  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:46.104584  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:46.132735  359214 cri.go:89] found id: ""
	I1213 10:38:46.132750  359214 logs.go:282] 0 containers: []
	W1213 10:38:46.132758  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:46.132763  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:46.132844  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:46.171837  359214 cri.go:89] found id: ""
	I1213 10:38:46.171852  359214 logs.go:282] 0 containers: []
	W1213 10:38:46.171859  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:46.171865  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:46.171925  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:46.214470  359214 cri.go:89] found id: ""
	I1213 10:38:46.214484  359214 logs.go:282] 0 containers: []
	W1213 10:38:46.214501  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:46.214508  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:46.214581  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:46.241616  359214 cri.go:89] found id: ""
	I1213 10:38:46.241631  359214 logs.go:282] 0 containers: []
	W1213 10:38:46.241638  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:46.241646  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:46.241657  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:46.269691  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:46.269717  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:46.326434  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:46.326454  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:46.342808  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:46.342825  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:46.406446  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:46.398462   11336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.399218   11336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.400888   11336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.401204   11336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.402682   11336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:46.398462   11336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.399218   11336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.400888   11336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.401204   11336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.402682   11336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:46.406456  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:46.406466  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:48.970215  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:48.980360  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:48.980424  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:49.007836  359214 cri.go:89] found id: ""
	I1213 10:38:49.007857  359214 logs.go:282] 0 containers: []
	W1213 10:38:49.007865  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:49.007870  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:49.007930  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:49.032102  359214 cri.go:89] found id: ""
	I1213 10:38:49.032116  359214 logs.go:282] 0 containers: []
	W1213 10:38:49.032124  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:49.032129  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:49.032188  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:49.056548  359214 cri.go:89] found id: ""
	I1213 10:38:49.056562  359214 logs.go:282] 0 containers: []
	W1213 10:38:49.056577  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:49.056582  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:49.056638  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:49.080172  359214 cri.go:89] found id: ""
	I1213 10:38:49.080186  359214 logs.go:282] 0 containers: []
	W1213 10:38:49.080194  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:49.080199  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:49.080257  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:49.104358  359214 cri.go:89] found id: ""
	I1213 10:38:49.104372  359214 logs.go:282] 0 containers: []
	W1213 10:38:49.104380  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:49.104385  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:49.104456  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:49.131026  359214 cri.go:89] found id: ""
	I1213 10:38:49.131041  359214 logs.go:282] 0 containers: []
	W1213 10:38:49.131048  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:49.131054  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:49.131111  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:49.155850  359214 cri.go:89] found id: ""
	I1213 10:38:49.155865  359214 logs.go:282] 0 containers: []
	W1213 10:38:49.155872  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:49.155881  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:49.155891  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:49.237398  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:49.228981   11424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.229481   11424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.231324   11424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.231926   11424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.233542   11424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:49.228981   11424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.229481   11424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.231324   11424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.231926   11424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.233542   11424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:49.237409  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:49.237422  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:49.300000  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:49.300020  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:49.330957  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:49.330973  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:49.392815  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:49.392834  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:51.909143  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:51.919406  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:51.919465  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:51.948136  359214 cri.go:89] found id: ""
	I1213 10:38:51.948150  359214 logs.go:282] 0 containers: []
	W1213 10:38:51.948157  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:51.948163  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:51.948221  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:51.972396  359214 cri.go:89] found id: ""
	I1213 10:38:51.972411  359214 logs.go:282] 0 containers: []
	W1213 10:38:51.972420  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:51.972424  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:51.972497  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:52.003416  359214 cri.go:89] found id: ""
	I1213 10:38:52.003433  359214 logs.go:282] 0 containers: []
	W1213 10:38:52.003442  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:52.003449  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:52.003533  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:52.031359  359214 cri.go:89] found id: ""
	I1213 10:38:52.031374  359214 logs.go:282] 0 containers: []
	W1213 10:38:52.031382  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:52.031387  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:52.031447  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:52.056514  359214 cri.go:89] found id: ""
	I1213 10:38:52.056529  359214 logs.go:282] 0 containers: []
	W1213 10:38:52.056536  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:52.056541  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:52.056619  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:52.085509  359214 cri.go:89] found id: ""
	I1213 10:38:52.085524  359214 logs.go:282] 0 containers: []
	W1213 10:38:52.085533  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:52.085539  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:52.085613  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:52.113117  359214 cri.go:89] found id: ""
	I1213 10:38:52.113131  359214 logs.go:282] 0 containers: []
	W1213 10:38:52.113138  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:52.113146  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:52.113157  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:52.129605  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:52.129627  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:52.198531  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:52.190917   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.191383   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.192873   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.193169   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.194579   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:52.190917   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.191383   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.192873   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.193169   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.194579   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:52.198542  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:52.198554  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:52.267617  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:52.267640  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:52.301362  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:52.301379  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:54.858319  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:54.868860  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:54.868931  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:54.895935  359214 cri.go:89] found id: ""
	I1213 10:38:54.895949  359214 logs.go:282] 0 containers: []
	W1213 10:38:54.895956  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:54.895962  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:54.896020  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:54.924712  359214 cri.go:89] found id: ""
	I1213 10:38:54.924727  359214 logs.go:282] 0 containers: []
	W1213 10:38:54.924734  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:54.924740  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:54.924807  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:54.949662  359214 cri.go:89] found id: ""
	I1213 10:38:54.949677  359214 logs.go:282] 0 containers: []
	W1213 10:38:54.949685  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:54.949690  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:54.949758  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:54.973861  359214 cri.go:89] found id: ""
	I1213 10:38:54.973876  359214 logs.go:282] 0 containers: []
	W1213 10:38:54.973883  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:54.973889  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:54.973949  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:54.999167  359214 cri.go:89] found id: ""
	I1213 10:38:54.999182  359214 logs.go:282] 0 containers: []
	W1213 10:38:54.999190  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:54.999196  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:54.999267  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:55.030614  359214 cri.go:89] found id: ""
	I1213 10:38:55.030630  359214 logs.go:282] 0 containers: []
	W1213 10:38:55.030638  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:55.030644  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:55.030764  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:55.059903  359214 cri.go:89] found id: ""
	I1213 10:38:55.059918  359214 logs.go:282] 0 containers: []
	W1213 10:38:55.059925  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:55.059933  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:55.059943  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:55.129097  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:55.129156  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:55.157699  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:55.157717  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:55.226688  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:55.226706  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:55.244093  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:55.244111  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:55.309464  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:55.300977   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.301803   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.303423   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.304086   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.305672   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:55.300977   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.301803   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.303423   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.304086   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.305672   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:57.809736  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:57.819959  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:57.820025  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:57.844184  359214 cri.go:89] found id: ""
	I1213 10:38:57.844198  359214 logs.go:282] 0 containers: []
	W1213 10:38:57.844206  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:57.844211  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:57.844270  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:57.869511  359214 cri.go:89] found id: ""
	I1213 10:38:57.869524  359214 logs.go:282] 0 containers: []
	W1213 10:38:57.869532  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:57.869553  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:57.869613  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:57.895212  359214 cri.go:89] found id: ""
	I1213 10:38:57.895226  359214 logs.go:282] 0 containers: []
	W1213 10:38:57.895234  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:57.895239  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:57.895298  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:57.919989  359214 cri.go:89] found id: ""
	I1213 10:38:57.920004  359214 logs.go:282] 0 containers: []
	W1213 10:38:57.920011  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:57.920018  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:57.920076  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:57.948250  359214 cri.go:89] found id: ""
	I1213 10:38:57.948263  359214 logs.go:282] 0 containers: []
	W1213 10:38:57.948271  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:57.948277  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:57.948334  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:57.974322  359214 cri.go:89] found id: ""
	I1213 10:38:57.974337  359214 logs.go:282] 0 containers: []
	W1213 10:38:57.974345  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:57.974350  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:57.974423  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:58.005721  359214 cri.go:89] found id: ""
	I1213 10:38:58.005737  359214 logs.go:282] 0 containers: []
	W1213 10:38:58.005747  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:58.005757  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:58.005768  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:58.064186  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:58.064207  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:58.080907  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:58.080924  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:58.146147  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:58.137210   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.137944   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.139692   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.140402   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.141981   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:58.137210   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.137944   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.139692   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.140402   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.141981   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:58.146159  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:58.146170  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:58.214235  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:58.214253  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:00.744729  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:00.755028  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:00.755086  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:00.780193  359214 cri.go:89] found id: ""
	I1213 10:39:00.780207  359214 logs.go:282] 0 containers: []
	W1213 10:39:00.780215  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:00.780221  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:00.780293  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:00.806094  359214 cri.go:89] found id: ""
	I1213 10:39:00.806109  359214 logs.go:282] 0 containers: []
	W1213 10:39:00.806116  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:00.806123  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:00.806190  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:00.830215  359214 cri.go:89] found id: ""
	I1213 10:39:00.830229  359214 logs.go:282] 0 containers: []
	W1213 10:39:00.830236  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:00.830241  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:00.830298  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:00.858553  359214 cri.go:89] found id: ""
	I1213 10:39:00.858567  359214 logs.go:282] 0 containers: []
	W1213 10:39:00.858575  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:00.858581  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:00.858638  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:00.883276  359214 cri.go:89] found id: ""
	I1213 10:39:00.883290  359214 logs.go:282] 0 containers: []
	W1213 10:39:00.883298  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:00.883304  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:00.883366  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:00.908199  359214 cri.go:89] found id: ""
	I1213 10:39:00.908214  359214 logs.go:282] 0 containers: []
	W1213 10:39:00.908222  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:00.908235  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:00.908292  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:00.933487  359214 cri.go:89] found id: ""
	I1213 10:39:00.933502  359214 logs.go:282] 0 containers: []
	W1213 10:39:00.933510  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:00.933518  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:00.933529  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:00.999819  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:00.990764   11836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:00.991604   11836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:00.993277   11836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:00.993599   11836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:00.995238   11836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:00.990764   11836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:00.991604   11836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:00.993277   11836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:00.993599   11836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:00.995238   11836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:00.999831  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:00.999851  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:01.070347  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:01.070376  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:01.099348  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:01.099367  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:01.160766  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:01.160789  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:03.683134  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:03.693419  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:03.693479  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:03.724358  359214 cri.go:89] found id: ""
	I1213 10:39:03.724373  359214 logs.go:282] 0 containers: []
	W1213 10:39:03.724380  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:03.724386  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:03.724446  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:03.749342  359214 cri.go:89] found id: ""
	I1213 10:39:03.749357  359214 logs.go:282] 0 containers: []
	W1213 10:39:03.749365  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:03.749370  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:03.749428  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:03.777066  359214 cri.go:89] found id: ""
	I1213 10:39:03.777081  359214 logs.go:282] 0 containers: []
	W1213 10:39:03.777088  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:03.777094  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:03.777153  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:03.802375  359214 cri.go:89] found id: ""
	I1213 10:39:03.802390  359214 logs.go:282] 0 containers: []
	W1213 10:39:03.802397  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:03.802405  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:03.802463  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:03.828597  359214 cri.go:89] found id: ""
	I1213 10:39:03.828613  359214 logs.go:282] 0 containers: []
	W1213 10:39:03.828620  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:03.828626  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:03.828688  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:03.854166  359214 cri.go:89] found id: ""
	I1213 10:39:03.854187  359214 logs.go:282] 0 containers: []
	W1213 10:39:03.854195  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:03.854201  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:03.854261  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:03.879516  359214 cri.go:89] found id: ""
	I1213 10:39:03.879533  359214 logs.go:282] 0 containers: []
	W1213 10:39:03.879540  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:03.879549  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:03.879559  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:03.936679  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:03.936700  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:03.953300  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:03.953317  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:04.029874  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:04.020037   11948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.021068   11948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.022008   11948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.023857   11948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.024567   11948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:04.020037   11948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.021068   11948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.022008   11948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.023857   11948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.024567   11948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:04.029886  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:04.029896  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:04.097622  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:04.097643  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:06.630848  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:06.641568  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:06.641629  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:06.667996  359214 cri.go:89] found id: ""
	I1213 10:39:06.668011  359214 logs.go:282] 0 containers: []
	W1213 10:39:06.668019  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:06.668024  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:06.668090  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:06.697263  359214 cri.go:89] found id: ""
	I1213 10:39:06.697278  359214 logs.go:282] 0 containers: []
	W1213 10:39:06.697293  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:06.697299  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:06.697359  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:06.722757  359214 cri.go:89] found id: ""
	I1213 10:39:06.722772  359214 logs.go:282] 0 containers: []
	W1213 10:39:06.722780  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:06.722785  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:06.722844  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:06.746758  359214 cri.go:89] found id: ""
	I1213 10:39:06.746772  359214 logs.go:282] 0 containers: []
	W1213 10:39:06.746780  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:06.746786  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:06.746845  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:06.775078  359214 cri.go:89] found id: ""
	I1213 10:39:06.775093  359214 logs.go:282] 0 containers: []
	W1213 10:39:06.775100  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:06.775105  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:06.775164  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:06.800898  359214 cri.go:89] found id: ""
	I1213 10:39:06.800914  359214 logs.go:282] 0 containers: []
	W1213 10:39:06.800921  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:06.800926  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:06.800983  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:06.829594  359214 cri.go:89] found id: ""
	I1213 10:39:06.829624  359214 logs.go:282] 0 containers: []
	W1213 10:39:06.829648  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:06.829656  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:06.829666  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:06.893293  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:06.893314  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:06.921544  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:06.921562  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:06.981949  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:06.981969  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:06.998794  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:06.998816  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:07.067966  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:07.059691   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.060374   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.061914   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.062229   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.063682   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:07.059691   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.060374   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.061914   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.062229   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.063682   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:09.568245  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:09.578515  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:09.578574  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:09.604486  359214 cri.go:89] found id: ""
	I1213 10:39:09.604500  359214 logs.go:282] 0 containers: []
	W1213 10:39:09.604507  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:09.604512  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:09.604572  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:09.628878  359214 cri.go:89] found id: ""
	I1213 10:39:09.628894  359214 logs.go:282] 0 containers: []
	W1213 10:39:09.628902  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:09.628912  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:09.628971  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:09.654182  359214 cri.go:89] found id: ""
	I1213 10:39:09.654196  359214 logs.go:282] 0 containers: []
	W1213 10:39:09.654204  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:09.654209  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:09.654268  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:09.679850  359214 cri.go:89] found id: ""
	I1213 10:39:09.679864  359214 logs.go:282] 0 containers: []
	W1213 10:39:09.679871  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:09.679877  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:09.679937  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:09.708630  359214 cri.go:89] found id: ""
	I1213 10:39:09.708644  359214 logs.go:282] 0 containers: []
	W1213 10:39:09.708651  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:09.708657  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:09.708716  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:09.732554  359214 cri.go:89] found id: ""
	I1213 10:39:09.732568  359214 logs.go:282] 0 containers: []
	W1213 10:39:09.732575  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:09.732581  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:09.732642  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:09.757631  359214 cri.go:89] found id: ""
	I1213 10:39:09.757646  359214 logs.go:282] 0 containers: []
	W1213 10:39:09.757654  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:09.757663  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:09.757674  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:09.816181  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:09.816203  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:09.832514  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:09.832531  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:09.897359  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:09.888543   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:09.889254   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:09.891102   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:09.891693   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:09.893450   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:09.888543   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:09.889254   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:09.891102   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:09.891693   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:09.893450   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:09.897369  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:09.897379  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:09.960943  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:09.960964  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:12.490984  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:12.501823  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:12.501893  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:12.532332  359214 cri.go:89] found id: ""
	I1213 10:39:12.532347  359214 logs.go:282] 0 containers: []
	W1213 10:39:12.532354  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:12.532359  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:12.532419  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:12.558457  359214 cri.go:89] found id: ""
	I1213 10:39:12.558471  359214 logs.go:282] 0 containers: []
	W1213 10:39:12.558479  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:12.558485  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:12.558545  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:12.585075  359214 cri.go:89] found id: ""
	I1213 10:39:12.585089  359214 logs.go:282] 0 containers: []
	W1213 10:39:12.585097  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:12.585102  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:12.585160  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:12.614401  359214 cri.go:89] found id: ""
	I1213 10:39:12.614415  359214 logs.go:282] 0 containers: []
	W1213 10:39:12.614422  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:12.614428  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:12.614486  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:12.639152  359214 cri.go:89] found id: ""
	I1213 10:39:12.639166  359214 logs.go:282] 0 containers: []
	W1213 10:39:12.639173  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:12.639179  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:12.639240  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:12.667593  359214 cri.go:89] found id: ""
	I1213 10:39:12.667607  359214 logs.go:282] 0 containers: []
	W1213 10:39:12.667614  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:12.667620  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:12.667681  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:12.691984  359214 cri.go:89] found id: ""
	I1213 10:39:12.691997  359214 logs.go:282] 0 containers: []
	W1213 10:39:12.692005  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:12.692013  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:12.692024  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:12.756546  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:12.748299   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:12.748690   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:12.750244   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:12.750570   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:12.752183   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:12.748299   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:12.748690   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:12.750244   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:12.750570   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:12.752183   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:12.756556  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:12.756567  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:12.820864  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:12.820885  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:12.853253  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:12.853289  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:12.911659  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:12.911678  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:15.427988  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:15.439459  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:15.439523  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:15.476834  359214 cri.go:89] found id: ""
	I1213 10:39:15.476849  359214 logs.go:282] 0 containers: []
	W1213 10:39:15.476856  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:15.476862  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:15.476926  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:15.501586  359214 cri.go:89] found id: ""
	I1213 10:39:15.501601  359214 logs.go:282] 0 containers: []
	W1213 10:39:15.501609  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:15.501614  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:15.501675  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:15.526367  359214 cri.go:89] found id: ""
	I1213 10:39:15.526381  359214 logs.go:282] 0 containers: []
	W1213 10:39:15.526399  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:15.526406  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:15.526473  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:15.551126  359214 cri.go:89] found id: ""
	I1213 10:39:15.551141  359214 logs.go:282] 0 containers: []
	W1213 10:39:15.551148  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:15.551154  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:15.551209  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:15.576958  359214 cri.go:89] found id: ""
	I1213 10:39:15.576973  359214 logs.go:282] 0 containers: []
	W1213 10:39:15.576990  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:15.576996  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:15.577062  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:15.601287  359214 cri.go:89] found id: ""
	I1213 10:39:15.601300  359214 logs.go:282] 0 containers: []
	W1213 10:39:15.601308  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:15.601313  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:15.601371  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:15.628822  359214 cri.go:89] found id: ""
	I1213 10:39:15.628837  359214 logs.go:282] 0 containers: []
	W1213 10:39:15.628844  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:15.628852  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:15.628862  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:15.644985  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:15.645002  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:15.711548  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:15.703095   12361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:15.703681   12361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:15.705285   12361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:15.705963   12361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:15.707559   12361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:15.703095   12361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:15.703681   12361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:15.705285   12361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:15.705963   12361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:15.707559   12361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:15.711559  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:15.711571  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:15.775011  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:15.775031  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:15.802522  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:15.802545  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:18.359921  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:18.369925  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:18.369992  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:18.393448  359214 cri.go:89] found id: ""
	I1213 10:39:18.393462  359214 logs.go:282] 0 containers: []
	W1213 10:39:18.393470  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:18.393476  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:18.393532  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:18.426863  359214 cri.go:89] found id: ""
	I1213 10:39:18.426876  359214 logs.go:282] 0 containers: []
	W1213 10:39:18.426884  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:18.426889  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:18.426946  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:18.472251  359214 cri.go:89] found id: ""
	I1213 10:39:18.472264  359214 logs.go:282] 0 containers: []
	W1213 10:39:18.472272  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:18.472277  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:18.472333  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:18.500412  359214 cri.go:89] found id: ""
	I1213 10:39:18.500427  359214 logs.go:282] 0 containers: []
	W1213 10:39:18.500434  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:18.500440  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:18.500500  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:18.524823  359214 cri.go:89] found id: ""
	I1213 10:39:18.524837  359214 logs.go:282] 0 containers: []
	W1213 10:39:18.524845  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:18.524850  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:18.524908  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:18.549332  359214 cri.go:89] found id: ""
	I1213 10:39:18.549346  359214 logs.go:282] 0 containers: []
	W1213 10:39:18.549354  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:18.549359  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:18.549417  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:18.577251  359214 cri.go:89] found id: ""
	I1213 10:39:18.577271  359214 logs.go:282] 0 containers: []
	W1213 10:39:18.577279  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:18.577287  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:18.577299  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:18.639510  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:18.639530  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:18.677762  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:18.677777  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:18.737061  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:18.737080  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:18.753422  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:18.753439  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:18.823128  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:18.814301   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:18.815633   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:18.816172   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:18.817539   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:18.818059   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:18.814301   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:18.815633   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:18.816172   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:18.817539   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:18.818059   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:21.323418  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:21.333772  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:21.333833  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:21.368103  359214 cri.go:89] found id: ""
	I1213 10:39:21.368118  359214 logs.go:282] 0 containers: []
	W1213 10:39:21.368125  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:21.368131  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:21.368188  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:21.392848  359214 cri.go:89] found id: ""
	I1213 10:39:21.392862  359214 logs.go:282] 0 containers: []
	W1213 10:39:21.392870  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:21.392875  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:21.392932  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:21.426067  359214 cri.go:89] found id: ""
	I1213 10:39:21.426082  359214 logs.go:282] 0 containers: []
	W1213 10:39:21.426089  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:21.426094  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:21.426153  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:21.453497  359214 cri.go:89] found id: ""
	I1213 10:39:21.453521  359214 logs.go:282] 0 containers: []
	W1213 10:39:21.453529  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:21.453535  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:21.453600  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:21.486155  359214 cri.go:89] found id: ""
	I1213 10:39:21.486170  359214 logs.go:282] 0 containers: []
	W1213 10:39:21.486187  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:21.486193  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:21.486262  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:21.512133  359214 cri.go:89] found id: ""
	I1213 10:39:21.512148  359214 logs.go:282] 0 containers: []
	W1213 10:39:21.512155  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:21.512161  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:21.512219  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:21.536909  359214 cri.go:89] found id: ""
	I1213 10:39:21.536925  359214 logs.go:282] 0 containers: []
	W1213 10:39:21.536932  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:21.536940  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:21.536951  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:21.564635  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:21.564651  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:21.621861  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:21.621882  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:21.638280  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:21.638297  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:21.706649  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:21.698160   12585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.698774   12585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.700554   12585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.701257   12585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.702523   12585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:21.698160   12585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.698774   12585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.700554   12585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.701257   12585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.702523   12585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:21.706660  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:21.706678  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:24.270851  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:24.281891  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:24.281959  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:24.306887  359214 cri.go:89] found id: ""
	I1213 10:39:24.306902  359214 logs.go:282] 0 containers: []
	W1213 10:39:24.306910  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:24.306916  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:24.306989  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:24.330995  359214 cri.go:89] found id: ""
	I1213 10:39:24.331009  359214 logs.go:282] 0 containers: []
	W1213 10:39:24.331018  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:24.331023  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:24.331079  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:24.358824  359214 cri.go:89] found id: ""
	I1213 10:39:24.358838  359214 logs.go:282] 0 containers: []
	W1213 10:39:24.358845  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:24.358850  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:24.358907  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:24.383545  359214 cri.go:89] found id: ""
	I1213 10:39:24.383559  359214 logs.go:282] 0 containers: []
	W1213 10:39:24.383566  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:24.383572  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:24.383628  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:24.407288  359214 cri.go:89] found id: ""
	I1213 10:39:24.407302  359214 logs.go:282] 0 containers: []
	W1213 10:39:24.407309  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:24.407315  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:24.407374  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:24.441689  359214 cri.go:89] found id: ""
	I1213 10:39:24.441703  359214 logs.go:282] 0 containers: []
	W1213 10:39:24.441720  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:24.441727  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:24.441796  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:24.469372  359214 cri.go:89] found id: ""
	I1213 10:39:24.469387  359214 logs.go:282] 0 containers: []
	W1213 10:39:24.469394  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:24.469402  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:24.469418  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:24.529071  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:24.529091  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:24.545770  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:24.545786  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:24.619385  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:24.610753   12679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.611526   12679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.613120   12679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.613552   12679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.615328   12679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:24.610753   12679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.611526   12679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.613120   12679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.613552   12679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.615328   12679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:24.619395  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:24.619406  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:24.683002  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:24.683029  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:27.214048  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:27.223825  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:27.223885  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:27.249091  359214 cri.go:89] found id: ""
	I1213 10:39:27.249106  359214 logs.go:282] 0 containers: []
	W1213 10:39:27.249114  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:27.249120  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:27.249175  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:27.274216  359214 cri.go:89] found id: ""
	I1213 10:39:27.274231  359214 logs.go:282] 0 containers: []
	W1213 10:39:27.274238  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:27.274243  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:27.274301  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:27.306051  359214 cri.go:89] found id: ""
	I1213 10:39:27.306068  359214 logs.go:282] 0 containers: []
	W1213 10:39:27.306076  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:27.306081  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:27.306162  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:27.329993  359214 cri.go:89] found id: ""
	I1213 10:39:27.330015  359214 logs.go:282] 0 containers: []
	W1213 10:39:27.330022  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:27.330027  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:27.330084  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:27.357738  359214 cri.go:89] found id: ""
	I1213 10:39:27.357759  359214 logs.go:282] 0 containers: []
	W1213 10:39:27.357766  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:27.357772  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:27.357829  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:27.383932  359214 cri.go:89] found id: ""
	I1213 10:39:27.383948  359214 logs.go:282] 0 containers: []
	W1213 10:39:27.383955  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:27.383960  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:27.384021  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:27.408273  359214 cri.go:89] found id: ""
	I1213 10:39:27.408298  359214 logs.go:282] 0 containers: []
	W1213 10:39:27.408306  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:27.408314  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:27.408324  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:27.473400  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:27.473421  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:27.490562  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:27.490580  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:27.560540  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:27.551714   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.552445   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.554637   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.555366   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.556555   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:27.551714   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.552445   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.554637   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.555366   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.556555   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:27.560551  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:27.560562  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:27.623676  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:27.623700  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:30.153068  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:30.164672  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:30.164745  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:30.192223  359214 cri.go:89] found id: ""
	I1213 10:39:30.192239  359214 logs.go:282] 0 containers: []
	W1213 10:39:30.192248  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:30.192254  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:30.192336  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:30.224222  359214 cri.go:89] found id: ""
	I1213 10:39:30.224237  359214 logs.go:282] 0 containers: []
	W1213 10:39:30.224245  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:30.224251  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:30.224319  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:30.250132  359214 cri.go:89] found id: ""
	I1213 10:39:30.250148  359214 logs.go:282] 0 containers: []
	W1213 10:39:30.250156  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:30.250161  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:30.250232  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:30.278166  359214 cri.go:89] found id: ""
	I1213 10:39:30.278182  359214 logs.go:282] 0 containers: []
	W1213 10:39:30.278199  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:30.278205  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:30.278271  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:30.304028  359214 cri.go:89] found id: ""
	I1213 10:39:30.304043  359214 logs.go:282] 0 containers: []
	W1213 10:39:30.304050  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:30.304055  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:30.304112  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:30.328660  359214 cri.go:89] found id: ""
	I1213 10:39:30.328675  359214 logs.go:282] 0 containers: []
	W1213 10:39:30.328693  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:30.328699  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:30.328767  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:30.352850  359214 cri.go:89] found id: ""
	I1213 10:39:30.352865  359214 logs.go:282] 0 containers: []
	W1213 10:39:30.352877  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:30.352886  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:30.352896  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:30.408893  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:30.408912  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:30.428762  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:30.428779  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:30.500428  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:30.492113   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.492871   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.494609   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.495292   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.496285   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:30.492113   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.492871   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.494609   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.495292   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.496285   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:30.500438  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:30.500449  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:30.563541  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:30.563560  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:33.092955  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:33.103393  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:33.103457  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:33.128626  359214 cri.go:89] found id: ""
	I1213 10:39:33.128640  359214 logs.go:282] 0 containers: []
	W1213 10:39:33.128647  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:33.128653  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:33.128709  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:33.156533  359214 cri.go:89] found id: ""
	I1213 10:39:33.156548  359214 logs.go:282] 0 containers: []
	W1213 10:39:33.156555  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:33.156561  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:33.156631  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:33.181965  359214 cri.go:89] found id: ""
	I1213 10:39:33.181979  359214 logs.go:282] 0 containers: []
	W1213 10:39:33.181987  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:33.181992  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:33.182066  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:33.210753  359214 cri.go:89] found id: ""
	I1213 10:39:33.210767  359214 logs.go:282] 0 containers: []
	W1213 10:39:33.210775  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:33.210780  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:33.210846  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:33.236369  359214 cri.go:89] found id: ""
	I1213 10:39:33.236384  359214 logs.go:282] 0 containers: []
	W1213 10:39:33.236391  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:33.236396  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:33.236453  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:33.261374  359214 cri.go:89] found id: ""
	I1213 10:39:33.261390  359214 logs.go:282] 0 containers: []
	W1213 10:39:33.261397  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:33.261403  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:33.261476  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:33.286480  359214 cri.go:89] found id: ""
	I1213 10:39:33.286496  359214 logs.go:282] 0 containers: []
	W1213 10:39:33.286512  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:33.286536  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:33.286547  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:33.344247  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:33.344268  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:33.362163  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:33.362178  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:33.431331  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:33.423097   12996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.423938   12996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.425571   12996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.425890   12996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.427375   12996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:33.423097   12996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.423938   12996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.425571   12996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.425890   12996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.427375   12996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:33.431340  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:33.431351  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:33.514221  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:33.514250  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:36.043055  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:36.053301  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:36.053366  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:36.078047  359214 cri.go:89] found id: ""
	I1213 10:39:36.078061  359214 logs.go:282] 0 containers: []
	W1213 10:39:36.078069  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:36.078074  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:36.078135  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:36.104994  359214 cri.go:89] found id: ""
	I1213 10:39:36.105009  359214 logs.go:282] 0 containers: []
	W1213 10:39:36.105017  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:36.105022  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:36.105083  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:36.138243  359214 cri.go:89] found id: ""
	I1213 10:39:36.138257  359214 logs.go:282] 0 containers: []
	W1213 10:39:36.138264  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:36.138270  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:36.138331  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:36.163657  359214 cri.go:89] found id: ""
	I1213 10:39:36.163672  359214 logs.go:282] 0 containers: []
	W1213 10:39:36.163679  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:36.163685  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:36.163744  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:36.192631  359214 cri.go:89] found id: ""
	I1213 10:39:36.192646  359214 logs.go:282] 0 containers: []
	W1213 10:39:36.192653  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:36.192658  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:36.192715  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:36.217613  359214 cri.go:89] found id: ""
	I1213 10:39:36.217626  359214 logs.go:282] 0 containers: []
	W1213 10:39:36.217634  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:36.217641  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:36.217699  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:36.242973  359214 cri.go:89] found id: ""
	I1213 10:39:36.242988  359214 logs.go:282] 0 containers: []
	W1213 10:39:36.242995  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:36.243004  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:36.243015  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:36.299822  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:36.299843  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:36.316930  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:36.316947  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:36.384839  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:36.376386   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.377339   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.379075   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.379670   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.381017   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:36.376386   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.377339   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.379075   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.379670   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.381017   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:36.384850  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:36.384860  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:36.453800  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:36.453820  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:38.992805  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:39.004323  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:39.004395  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:39.029542  359214 cri.go:89] found id: ""
	I1213 10:39:39.029556  359214 logs.go:282] 0 containers: []
	W1213 10:39:39.029564  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:39.029569  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:39.029634  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:39.058191  359214 cri.go:89] found id: ""
	I1213 10:39:39.058205  359214 logs.go:282] 0 containers: []
	W1213 10:39:39.058212  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:39.058217  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:39.058278  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:39.082506  359214 cri.go:89] found id: ""
	I1213 10:39:39.082520  359214 logs.go:282] 0 containers: []
	W1213 10:39:39.082527  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:39.082532  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:39.082588  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:39.107708  359214 cri.go:89] found id: ""
	I1213 10:39:39.107722  359214 logs.go:282] 0 containers: []
	W1213 10:39:39.107729  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:39.107735  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:39.107795  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:39.134092  359214 cri.go:89] found id: ""
	I1213 10:39:39.134106  359214 logs.go:282] 0 containers: []
	W1213 10:39:39.134114  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:39.134119  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:39.134176  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:39.159493  359214 cri.go:89] found id: ""
	I1213 10:39:39.159508  359214 logs.go:282] 0 containers: []
	W1213 10:39:39.159516  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:39.159521  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:39.159586  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:39.185250  359214 cri.go:89] found id: ""
	I1213 10:39:39.185270  359214 logs.go:282] 0 containers: []
	W1213 10:39:39.185278  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:39.185285  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:39.185296  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:39.212945  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:39.212964  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:39.270421  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:39.270441  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:39.287465  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:39.287483  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:39.353697  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:39.344780   13216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.345537   13216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.347125   13216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.347630   13216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.349255   13216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:39.344780   13216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.345537   13216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.347125   13216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.347630   13216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.349255   13216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:39.353707  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:39.353719  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:41.923052  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:41.933314  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:41.933380  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:41.957979  359214 cri.go:89] found id: ""
	I1213 10:39:41.957994  359214 logs.go:282] 0 containers: []
	W1213 10:39:41.958001  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:41.958006  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:41.958063  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:41.982504  359214 cri.go:89] found id: ""
	I1213 10:39:41.982519  359214 logs.go:282] 0 containers: []
	W1213 10:39:41.982527  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:41.982532  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:41.982594  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:42.034066  359214 cri.go:89] found id: ""
	I1213 10:39:42.034090  359214 logs.go:282] 0 containers: []
	W1213 10:39:42.034098  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:42.034103  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:42.034170  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:42.060660  359214 cri.go:89] found id: ""
	I1213 10:39:42.060675  359214 logs.go:282] 0 containers: []
	W1213 10:39:42.060682  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:42.060688  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:42.060760  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:42.089100  359214 cri.go:89] found id: ""
	I1213 10:39:42.089116  359214 logs.go:282] 0 containers: []
	W1213 10:39:42.089125  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:42.089131  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:42.089206  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:42.124357  359214 cri.go:89] found id: ""
	I1213 10:39:42.124373  359214 logs.go:282] 0 containers: []
	W1213 10:39:42.124382  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:42.124388  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:42.124457  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:42.154537  359214 cri.go:89] found id: ""
	I1213 10:39:42.154552  359214 logs.go:282] 0 containers: []
	W1213 10:39:42.154560  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:42.154568  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:42.154580  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:42.236098  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:42.226374   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.227371   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.229046   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.229696   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.231326   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:42.226374   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.227371   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.229046   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.229696   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.231326   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:42.236116  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:42.236128  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:42.301179  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:42.301201  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:42.331860  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:42.331876  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:42.389580  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:42.389599  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:44.907943  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:44.917971  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:44.918030  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:44.944860  359214 cri.go:89] found id: ""
	I1213 10:39:44.944876  359214 logs.go:282] 0 containers: []
	W1213 10:39:44.944883  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:44.944889  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:44.944947  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:44.969171  359214 cri.go:89] found id: ""
	I1213 10:39:44.969185  359214 logs.go:282] 0 containers: []
	W1213 10:39:44.969192  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:44.969197  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:44.969274  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:44.993953  359214 cri.go:89] found id: ""
	I1213 10:39:44.993968  359214 logs.go:282] 0 containers: []
	W1213 10:39:44.993975  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:44.993980  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:44.994036  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:45.047270  359214 cri.go:89] found id: ""
	I1213 10:39:45.047286  359214 logs.go:282] 0 containers: []
	W1213 10:39:45.047295  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:45.047308  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:45.047383  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:45.081157  359214 cri.go:89] found id: ""
	I1213 10:39:45.081173  359214 logs.go:282] 0 containers: []
	W1213 10:39:45.081182  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:45.081189  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:45.081275  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:45.121621  359214 cri.go:89] found id: ""
	I1213 10:39:45.121638  359214 logs.go:282] 0 containers: []
	W1213 10:39:45.121646  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:45.121652  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:45.121723  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:45.178070  359214 cri.go:89] found id: ""
	I1213 10:39:45.178087  359214 logs.go:282] 0 containers: []
	W1213 10:39:45.178095  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:45.178105  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:45.178117  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:45.242653  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:45.242715  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:45.312989  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:45.313030  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:45.333875  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:45.333893  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:45.402702  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:45.394811   13430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:45.395310   13430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:45.396989   13430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:45.397342   13430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:45.398810   13430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:45.394811   13430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:45.395310   13430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:45.396989   13430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:45.397342   13430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:45.398810   13430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:45.402713  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:45.402724  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:47.974092  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:47.984508  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:47.984581  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:48.011411  359214 cri.go:89] found id: ""
	I1213 10:39:48.011427  359214 logs.go:282] 0 containers: []
	W1213 10:39:48.011434  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:48.011440  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:48.011500  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:48.037430  359214 cri.go:89] found id: ""
	I1213 10:39:48.037445  359214 logs.go:282] 0 containers: []
	W1213 10:39:48.037464  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:48.037470  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:48.037541  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:48.068968  359214 cri.go:89] found id: ""
	I1213 10:39:48.068982  359214 logs.go:282] 0 containers: []
	W1213 10:39:48.068989  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:48.068994  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:48.069053  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:48.093935  359214 cri.go:89] found id: ""
	I1213 10:39:48.093949  359214 logs.go:282] 0 containers: []
	W1213 10:39:48.093966  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:48.093982  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:48.094054  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:48.118617  359214 cri.go:89] found id: ""
	I1213 10:39:48.118631  359214 logs.go:282] 0 containers: []
	W1213 10:39:48.118647  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:48.118653  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:48.118742  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:48.147778  359214 cri.go:89] found id: ""
	I1213 10:39:48.147792  359214 logs.go:282] 0 containers: []
	W1213 10:39:48.147802  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:48.147807  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:48.147866  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:48.171531  359214 cri.go:89] found id: ""
	I1213 10:39:48.171546  359214 logs.go:282] 0 containers: []
	W1213 10:39:48.171553  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:48.171562  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:48.171572  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:48.228511  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:48.228531  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:48.244723  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:48.244738  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:48.313285  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:48.305099   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:48.305923   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:48.307541   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:48.308109   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:48.309223   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:48.305099   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:48.305923   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:48.307541   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:48.308109   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:48.309223   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:48.313296  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:48.313307  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:48.374383  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:48.374405  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:50.902721  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:50.912675  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:50.912735  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:50.936964  359214 cri.go:89] found id: ""
	I1213 10:39:50.936978  359214 logs.go:282] 0 containers: []
	W1213 10:39:50.936986  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:50.936991  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:50.937050  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:50.960978  359214 cri.go:89] found id: ""
	I1213 10:39:50.960991  359214 logs.go:282] 0 containers: []
	W1213 10:39:50.960999  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:50.961004  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:50.961060  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:50.985441  359214 cri.go:89] found id: ""
	I1213 10:39:50.985455  359214 logs.go:282] 0 containers: []
	W1213 10:39:50.985462  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:50.985467  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:50.985524  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:51.012305  359214 cri.go:89] found id: ""
	I1213 10:39:51.012320  359214 logs.go:282] 0 containers: []
	W1213 10:39:51.012327  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:51.012333  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:51.012394  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:51.037844  359214 cri.go:89] found id: ""
	I1213 10:39:51.037858  359214 logs.go:282] 0 containers: []
	W1213 10:39:51.037865  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:51.037871  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:51.037930  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:51.062094  359214 cri.go:89] found id: ""
	I1213 10:39:51.062108  359214 logs.go:282] 0 containers: []
	W1213 10:39:51.062115  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:51.062121  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:51.062178  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:51.087816  359214 cri.go:89] found id: ""
	I1213 10:39:51.087831  359214 logs.go:282] 0 containers: []
	W1213 10:39:51.087839  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:51.087848  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:51.087860  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:51.144441  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:51.144462  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:51.161532  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:51.161551  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:51.232639  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:51.224130   13628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.224905   13628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.226632   13628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.227272   13628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.228817   13628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:51.224130   13628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.224905   13628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.226632   13628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.227272   13628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.228817   13628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:51.232650  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:51.232662  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:51.300854  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:51.300877  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:53.830183  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:53.840765  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:53.840829  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:53.867482  359214 cri.go:89] found id: ""
	I1213 10:39:53.867497  359214 logs.go:282] 0 containers: []
	W1213 10:39:53.867504  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:53.867510  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:53.867572  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:53.896830  359214 cri.go:89] found id: ""
	I1213 10:39:53.896844  359214 logs.go:282] 0 containers: []
	W1213 10:39:53.896852  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:53.896857  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:53.896921  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:53.921163  359214 cri.go:89] found id: ""
	I1213 10:39:53.921177  359214 logs.go:282] 0 containers: []
	W1213 10:39:53.921185  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:53.921190  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:53.921247  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:53.947006  359214 cri.go:89] found id: ""
	I1213 10:39:53.947020  359214 logs.go:282] 0 containers: []
	W1213 10:39:53.947027  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:53.947033  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:53.947089  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:53.971965  359214 cri.go:89] found id: ""
	I1213 10:39:53.971979  359214 logs.go:282] 0 containers: []
	W1213 10:39:53.971986  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:53.971992  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:53.972050  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:53.996770  359214 cri.go:89] found id: ""
	I1213 10:39:53.996785  359214 logs.go:282] 0 containers: []
	W1213 10:39:53.996792  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:53.996797  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:53.996856  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:54.029511  359214 cri.go:89] found id: ""
	I1213 10:39:54.029526  359214 logs.go:282] 0 containers: []
	W1213 10:39:54.029534  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:54.029542  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:54.029553  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:54.063523  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:54.063540  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:54.120600  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:54.120624  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:54.136821  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:54.136839  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:54.210067  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:54.201894   13746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:54.202593   13746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:54.204122   13746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:54.204658   13746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:54.206168   13746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:54.201894   13746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:54.202593   13746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:54.204122   13746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:54.204658   13746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:54.206168   13746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:54.210077  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:54.210087  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:56.773483  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:56.783689  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:56.783766  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:56.808277  359214 cri.go:89] found id: ""
	I1213 10:39:56.808291  359214 logs.go:282] 0 containers: []
	W1213 10:39:56.808299  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:56.808304  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:56.808368  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:56.832949  359214 cri.go:89] found id: ""
	I1213 10:39:56.832963  359214 logs.go:282] 0 containers: []
	W1213 10:39:56.832970  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:56.832976  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:56.833036  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:56.858222  359214 cri.go:89] found id: ""
	I1213 10:39:56.858236  359214 logs.go:282] 0 containers: []
	W1213 10:39:56.858250  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:56.858255  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:56.858313  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:56.886516  359214 cri.go:89] found id: ""
	I1213 10:39:56.886531  359214 logs.go:282] 0 containers: []
	W1213 10:39:56.886538  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:56.886543  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:56.886599  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:56.916534  359214 cri.go:89] found id: ""
	I1213 10:39:56.916548  359214 logs.go:282] 0 containers: []
	W1213 10:39:56.916554  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:56.916560  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:56.916620  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:56.941364  359214 cri.go:89] found id: ""
	I1213 10:39:56.941379  359214 logs.go:282] 0 containers: []
	W1213 10:39:56.941391  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:56.941397  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:56.941458  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:56.965977  359214 cri.go:89] found id: ""
	I1213 10:39:56.965991  359214 logs.go:282] 0 containers: []
	W1213 10:39:56.965998  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:56.966006  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:56.966017  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:57.022046  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:57.022066  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:57.038754  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:57.038773  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:57.104023  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:57.095403   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.096172   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.097756   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.098390   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.100006   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:57.095403   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.096172   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.097756   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.098390   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.100006   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:57.104033  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:57.104043  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:57.164889  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:57.164909  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:59.697427  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:59.709225  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:59.709293  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:59.736814  359214 cri.go:89] found id: ""
	I1213 10:39:59.736828  359214 logs.go:282] 0 containers: []
	W1213 10:39:59.736835  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:59.736840  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:59.736897  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:59.765228  359214 cri.go:89] found id: ""
	I1213 10:39:59.765243  359214 logs.go:282] 0 containers: []
	W1213 10:39:59.765250  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:59.765255  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:59.765321  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:59.790792  359214 cri.go:89] found id: ""
	I1213 10:39:59.790807  359214 logs.go:282] 0 containers: []
	W1213 10:39:59.790814  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:59.790819  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:59.790877  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:59.817123  359214 cri.go:89] found id: ""
	I1213 10:39:59.817137  359214 logs.go:282] 0 containers: []
	W1213 10:39:59.817149  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:59.817161  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:59.817225  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:59.842465  359214 cri.go:89] found id: ""
	I1213 10:39:59.842480  359214 logs.go:282] 0 containers: []
	W1213 10:39:59.842488  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:59.842493  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:59.842557  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:59.871828  359214 cri.go:89] found id: ""
	I1213 10:39:59.871842  359214 logs.go:282] 0 containers: []
	W1213 10:39:59.871859  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:59.871865  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:59.871921  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:59.895975  359214 cri.go:89] found id: ""
	I1213 10:39:59.895989  359214 logs.go:282] 0 containers: []
	W1213 10:39:59.895996  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:59.896004  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:59.896014  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:59.953038  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:59.953058  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:59.970121  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:59.970140  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:00.112897  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:00.082810   13940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.086414   13940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.089161   13940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.089674   13940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.099187   13940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:00.082810   13940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.086414   13940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.089161   13940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.089674   13940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.099187   13940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:00.112910  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:00.112922  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:00.251770  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:00.251795  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:02.813529  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:02.825083  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:02.825143  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:02.849893  359214 cri.go:89] found id: ""
	I1213 10:40:02.849907  359214 logs.go:282] 0 containers: []
	W1213 10:40:02.849915  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:02.849920  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:02.849979  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:02.876288  359214 cri.go:89] found id: ""
	I1213 10:40:02.876303  359214 logs.go:282] 0 containers: []
	W1213 10:40:02.876311  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:02.876316  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:02.876376  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:02.900996  359214 cri.go:89] found id: ""
	I1213 10:40:02.901011  359214 logs.go:282] 0 containers: []
	W1213 10:40:02.901018  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:02.901023  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:02.901085  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:02.941121  359214 cri.go:89] found id: ""
	I1213 10:40:02.941135  359214 logs.go:282] 0 containers: []
	W1213 10:40:02.941142  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:02.941148  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:02.941212  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:02.977122  359214 cri.go:89] found id: ""
	I1213 10:40:02.977137  359214 logs.go:282] 0 containers: []
	W1213 10:40:02.977145  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:02.977151  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:02.977211  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:03.007614  359214 cri.go:89] found id: ""
	I1213 10:40:03.007631  359214 logs.go:282] 0 containers: []
	W1213 10:40:03.007638  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:03.007644  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:03.007712  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:03.035112  359214 cri.go:89] found id: ""
	I1213 10:40:03.035128  359214 logs.go:282] 0 containers: []
	W1213 10:40:03.035135  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:03.035143  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:03.035153  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:03.092346  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:03.092365  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:03.109513  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:03.109531  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:03.178080  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:03.169681   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.170216   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.171843   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.172389   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.174013   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:03.169681   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.170216   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.171843   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.172389   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.174013   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:03.178092  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:03.178103  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:03.240824  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:03.240843  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:05.775438  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:05.785647  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:05.785707  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:05.809484  359214 cri.go:89] found id: ""
	I1213 10:40:05.809497  359214 logs.go:282] 0 containers: []
	W1213 10:40:05.809505  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:05.809510  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:05.809569  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:05.834754  359214 cri.go:89] found id: ""
	I1213 10:40:05.834769  359214 logs.go:282] 0 containers: []
	W1213 10:40:05.834777  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:05.834782  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:05.834844  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:05.858984  359214 cri.go:89] found id: ""
	I1213 10:40:05.858999  359214 logs.go:282] 0 containers: []
	W1213 10:40:05.859006  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:05.859011  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:05.859072  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:05.884414  359214 cri.go:89] found id: ""
	I1213 10:40:05.884429  359214 logs.go:282] 0 containers: []
	W1213 10:40:05.884436  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:05.884442  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:05.884504  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:05.918776  359214 cri.go:89] found id: ""
	I1213 10:40:05.918799  359214 logs.go:282] 0 containers: []
	W1213 10:40:05.918807  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:05.918812  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:05.918880  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:05.963307  359214 cri.go:89] found id: ""
	I1213 10:40:05.963331  359214 logs.go:282] 0 containers: []
	W1213 10:40:05.963340  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:05.963346  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:05.963414  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:05.989236  359214 cri.go:89] found id: ""
	I1213 10:40:05.989252  359214 logs.go:282] 0 containers: []
	W1213 10:40:05.989260  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:05.989274  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:05.989284  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:06.046789  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:06.046809  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:06.063391  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:06.063408  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:06.133569  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:06.125185   14153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.125769   14153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.127395   14153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.128000   14153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.129671   14153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:06.125185   14153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.125769   14153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.127395   14153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.128000   14153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.129671   14153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:06.133579  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:06.133590  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:06.199358  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:06.199385  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:08.731038  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:08.741608  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:08.741668  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:08.770775  359214 cri.go:89] found id: ""
	I1213 10:40:08.770798  359214 logs.go:282] 0 containers: []
	W1213 10:40:08.770806  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:08.770812  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:08.770880  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:08.795812  359214 cri.go:89] found id: ""
	I1213 10:40:08.795826  359214 logs.go:282] 0 containers: []
	W1213 10:40:08.795834  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:08.795839  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:08.795900  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:08.821389  359214 cri.go:89] found id: ""
	I1213 10:40:08.821405  359214 logs.go:282] 0 containers: []
	W1213 10:40:08.821415  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:08.821420  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:08.821484  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:08.847242  359214 cri.go:89] found id: ""
	I1213 10:40:08.847256  359214 logs.go:282] 0 containers: []
	W1213 10:40:08.847265  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:08.847271  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:08.847337  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:08.873913  359214 cri.go:89] found id: ""
	I1213 10:40:08.873927  359214 logs.go:282] 0 containers: []
	W1213 10:40:08.873935  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:08.873940  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:08.874003  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:08.898969  359214 cri.go:89] found id: ""
	I1213 10:40:08.898983  359214 logs.go:282] 0 containers: []
	W1213 10:40:08.898990  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:08.898997  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:08.899063  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:08.936984  359214 cri.go:89] found id: ""
	I1213 10:40:08.936999  359214 logs.go:282] 0 containers: []
	W1213 10:40:08.937006  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:08.937015  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:08.937026  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:09.003459  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:09.003483  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:09.022648  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:09.022673  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:09.089911  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:09.081728   14254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.082500   14254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.083990   14254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.084516   14254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.086022   14254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:09.081728   14254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.082500   14254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.083990   14254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.084516   14254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.086022   14254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:09.089922  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:09.089934  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:09.152235  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:09.152255  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:11.681167  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:11.691399  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:11.691463  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:11.720896  359214 cri.go:89] found id: ""
	I1213 10:40:11.720910  359214 logs.go:282] 0 containers: []
	W1213 10:40:11.720918  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:11.720924  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:11.720987  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:11.746089  359214 cri.go:89] found id: ""
	I1213 10:40:11.746103  359214 logs.go:282] 0 containers: []
	W1213 10:40:11.746111  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:11.746117  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:11.746176  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:11.770642  359214 cri.go:89] found id: ""
	I1213 10:40:11.770657  359214 logs.go:282] 0 containers: []
	W1213 10:40:11.770664  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:11.770670  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:11.770759  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:11.798877  359214 cri.go:89] found id: ""
	I1213 10:40:11.798891  359214 logs.go:282] 0 containers: []
	W1213 10:40:11.798900  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:11.798905  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:11.798965  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:11.824512  359214 cri.go:89] found id: ""
	I1213 10:40:11.824526  359214 logs.go:282] 0 containers: []
	W1213 10:40:11.824534  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:11.824539  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:11.824596  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:11.849644  359214 cri.go:89] found id: ""
	I1213 10:40:11.849658  359214 logs.go:282] 0 containers: []
	W1213 10:40:11.849665  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:11.849671  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:11.849728  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:11.878171  359214 cri.go:89] found id: ""
	I1213 10:40:11.878185  359214 logs.go:282] 0 containers: []
	W1213 10:40:11.878192  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:11.878201  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:11.878213  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:11.942012  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:11.942033  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:11.973830  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:11.973849  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:12.038115  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:12.038135  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:12.055328  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:12.055345  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:12.122312  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:12.113825   14374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:12.114885   14374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:12.116494   14374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:12.116834   14374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:12.118378   14374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:12.113825   14374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:12.114885   14374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:12.116494   14374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:12.116834   14374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:12.118378   14374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:14.622545  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:14.632872  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:14.632931  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:14.660285  359214 cri.go:89] found id: ""
	I1213 10:40:14.660300  359214 logs.go:282] 0 containers: []
	W1213 10:40:14.660308  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:14.660313  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:14.660370  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:14.686341  359214 cri.go:89] found id: ""
	I1213 10:40:14.686355  359214 logs.go:282] 0 containers: []
	W1213 10:40:14.686362  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:14.686368  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:14.686427  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:14.710306  359214 cri.go:89] found id: ""
	I1213 10:40:14.710321  359214 logs.go:282] 0 containers: []
	W1213 10:40:14.710328  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:14.710334  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:14.710392  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:14.736823  359214 cri.go:89] found id: ""
	I1213 10:40:14.736838  359214 logs.go:282] 0 containers: []
	W1213 10:40:14.736846  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:14.736851  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:14.736909  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:14.761623  359214 cri.go:89] found id: ""
	I1213 10:40:14.761638  359214 logs.go:282] 0 containers: []
	W1213 10:40:14.761645  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:14.761651  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:14.761710  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:14.786707  359214 cri.go:89] found id: ""
	I1213 10:40:14.786721  359214 logs.go:282] 0 containers: []
	W1213 10:40:14.786729  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:14.786734  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:14.786795  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:14.816346  359214 cri.go:89] found id: ""
	I1213 10:40:14.816361  359214 logs.go:282] 0 containers: []
	W1213 10:40:14.816368  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:14.816376  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:14.816386  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:14.877767  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:14.877786  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:14.914260  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:14.914277  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:14.980282  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:14.980303  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:14.996741  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:14.996760  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:15.099242  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:15.090567   14473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:15.091275   14473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:15.092910   14473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:15.093493   14473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:15.095098   14473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:15.090567   14473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:15.091275   14473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:15.092910   14473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:15.093493   14473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:15.095098   14473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:17.600882  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:17.611377  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:17.611437  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:17.639825  359214 cri.go:89] found id: ""
	I1213 10:40:17.639840  359214 logs.go:282] 0 containers: []
	W1213 10:40:17.639847  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:17.639853  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:17.639912  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:17.664963  359214 cri.go:89] found id: ""
	I1213 10:40:17.664977  359214 logs.go:282] 0 containers: []
	W1213 10:40:17.664985  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:17.664990  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:17.665052  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:17.690137  359214 cri.go:89] found id: ""
	I1213 10:40:17.690152  359214 logs.go:282] 0 containers: []
	W1213 10:40:17.690159  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:17.690165  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:17.690230  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:17.715292  359214 cri.go:89] found id: ""
	I1213 10:40:17.715307  359214 logs.go:282] 0 containers: []
	W1213 10:40:17.715315  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:17.715320  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:17.715382  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:17.744729  359214 cri.go:89] found id: ""
	I1213 10:40:17.744743  359214 logs.go:282] 0 containers: []
	W1213 10:40:17.744750  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:17.744756  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:17.744815  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:17.772253  359214 cri.go:89] found id: ""
	I1213 10:40:17.772268  359214 logs.go:282] 0 containers: []
	W1213 10:40:17.772276  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:17.772282  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:17.772348  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:17.797214  359214 cri.go:89] found id: ""
	I1213 10:40:17.797229  359214 logs.go:282] 0 containers: []
	W1213 10:40:17.797237  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:17.797245  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:17.797255  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:17.852633  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:17.852653  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:17.869612  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:17.869633  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:17.936787  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:17.927568   14559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:17.928465   14559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:17.930186   14559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:17.930475   14559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:17.932615   14559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:17.927568   14559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:17.928465   14559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:17.930186   14559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:17.930475   14559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:17.932615   14559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:17.936804  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:17.936815  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:18.005630  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:18.005656  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:20.537348  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:20.547703  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:20.547778  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:20.572977  359214 cri.go:89] found id: ""
	I1213 10:40:20.572991  359214 logs.go:282] 0 containers: []
	W1213 10:40:20.572998  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:20.573004  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:20.573062  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:20.602314  359214 cri.go:89] found id: ""
	I1213 10:40:20.602328  359214 logs.go:282] 0 containers: []
	W1213 10:40:20.602335  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:20.602341  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:20.602397  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:20.627655  359214 cri.go:89] found id: ""
	I1213 10:40:20.627669  359214 logs.go:282] 0 containers: []
	W1213 10:40:20.627686  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:20.627698  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:20.627767  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:20.655199  359214 cri.go:89] found id: ""
	I1213 10:40:20.655213  359214 logs.go:282] 0 containers: []
	W1213 10:40:20.655220  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:20.655226  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:20.655291  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:20.682083  359214 cri.go:89] found id: ""
	I1213 10:40:20.682107  359214 logs.go:282] 0 containers: []
	W1213 10:40:20.682115  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:20.682120  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:20.682189  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:20.707128  359214 cri.go:89] found id: ""
	I1213 10:40:20.707142  359214 logs.go:282] 0 containers: []
	W1213 10:40:20.707150  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:20.707155  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:20.707213  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:20.732071  359214 cri.go:89] found id: ""
	I1213 10:40:20.732087  359214 logs.go:282] 0 containers: []
	W1213 10:40:20.732094  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:20.732103  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:20.732112  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:20.797387  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:20.788274   14658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:20.789028   14658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:20.791053   14658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:20.791612   14658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:20.793250   14658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:20.788274   14658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:20.789028   14658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:20.791053   14658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:20.791612   14658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:20.793250   14658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:20.797397  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:20.797410  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:20.859451  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:20.859471  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:20.892801  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:20.892820  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:20.958351  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:20.958371  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:23.480839  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:23.491926  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:23.491987  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:23.518294  359214 cri.go:89] found id: ""
	I1213 10:40:23.518309  359214 logs.go:282] 0 containers: []
	W1213 10:40:23.518317  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:23.518324  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:23.518385  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:23.545487  359214 cri.go:89] found id: ""
	I1213 10:40:23.545502  359214 logs.go:282] 0 containers: []
	W1213 10:40:23.545509  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:23.545514  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:23.545584  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:23.571990  359214 cri.go:89] found id: ""
	I1213 10:40:23.572004  359214 logs.go:282] 0 containers: []
	W1213 10:40:23.572012  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:23.572017  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:23.572080  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:23.599133  359214 cri.go:89] found id: ""
	I1213 10:40:23.599149  359214 logs.go:282] 0 containers: []
	W1213 10:40:23.599157  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:23.599163  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:23.599223  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:23.626203  359214 cri.go:89] found id: ""
	I1213 10:40:23.626217  359214 logs.go:282] 0 containers: []
	W1213 10:40:23.626225  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:23.626232  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:23.626296  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:23.653325  359214 cri.go:89] found id: ""
	I1213 10:40:23.653341  359214 logs.go:282] 0 containers: []
	W1213 10:40:23.653349  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:23.653354  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:23.653423  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:23.688100  359214 cri.go:89] found id: ""
	I1213 10:40:23.688115  359214 logs.go:282] 0 containers: []
	W1213 10:40:23.688123  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:23.688132  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:23.688141  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:23.750798  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:23.750818  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:23.781668  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:23.781685  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:23.839211  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:23.839231  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:23.856390  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:23.856414  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:23.924021  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:23.914017   14781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:23.914911   14781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:23.915850   14781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:23.917610   14781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:23.918368   14781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:23.914017   14781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:23.914911   14781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:23.915850   14781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:23.917610   14781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:23.918368   14781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:26.424278  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:26.434304  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:26.434366  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:26.460634  359214 cri.go:89] found id: ""
	I1213 10:40:26.460649  359214 logs.go:282] 0 containers: []
	W1213 10:40:26.460657  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:26.460663  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:26.460723  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:26.485153  359214 cri.go:89] found id: ""
	I1213 10:40:26.485167  359214 logs.go:282] 0 containers: []
	W1213 10:40:26.485175  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:26.485180  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:26.485238  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:26.514602  359214 cri.go:89] found id: ""
	I1213 10:40:26.514617  359214 logs.go:282] 0 containers: []
	W1213 10:40:26.514624  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:26.514630  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:26.514715  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:26.539399  359214 cri.go:89] found id: ""
	I1213 10:40:26.539415  359214 logs.go:282] 0 containers: []
	W1213 10:40:26.539422  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:26.539427  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:26.539489  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:26.564066  359214 cri.go:89] found id: ""
	I1213 10:40:26.564081  359214 logs.go:282] 0 containers: []
	W1213 10:40:26.564088  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:26.564094  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:26.564158  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:26.595722  359214 cri.go:89] found id: ""
	I1213 10:40:26.595736  359214 logs.go:282] 0 containers: []
	W1213 10:40:26.595744  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:26.595749  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:26.595808  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:26.621852  359214 cri.go:89] found id: ""
	I1213 10:40:26.621867  359214 logs.go:282] 0 containers: []
	W1213 10:40:26.621875  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:26.621884  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:26.621894  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:26.678226  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:26.678245  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:26.694679  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:26.694762  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:26.760593  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:26.751702   14872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:26.752418   14872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:26.754240   14872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:26.754904   14872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:26.756624   14872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:26.751702   14872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:26.752418   14872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:26.754240   14872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:26.754904   14872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:26.756624   14872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:26.760604  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:26.760615  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:26.826139  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:26.826161  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:29.354247  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:29.364778  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:29.364838  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:29.391976  359214 cri.go:89] found id: ""
	I1213 10:40:29.391992  359214 logs.go:282] 0 containers: []
	W1213 10:40:29.391999  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:29.392006  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:29.392065  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:29.420898  359214 cri.go:89] found id: ""
	I1213 10:40:29.420913  359214 logs.go:282] 0 containers: []
	W1213 10:40:29.420920  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:29.420926  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:29.420995  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:29.445579  359214 cri.go:89] found id: ""
	I1213 10:40:29.445593  359214 logs.go:282] 0 containers: []
	W1213 10:40:29.445601  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:29.445606  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:29.445669  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:29.470481  359214 cri.go:89] found id: ""
	I1213 10:40:29.470496  359214 logs.go:282] 0 containers: []
	W1213 10:40:29.470504  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:29.470510  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:29.470571  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:29.494582  359214 cri.go:89] found id: ""
	I1213 10:40:29.494597  359214 logs.go:282] 0 containers: []
	W1213 10:40:29.494605  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:29.494612  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:29.494672  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:29.520784  359214 cri.go:89] found id: ""
	I1213 10:40:29.520801  359214 logs.go:282] 0 containers: []
	W1213 10:40:29.520810  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:29.520816  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:29.520879  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:29.546369  359214 cri.go:89] found id: ""
	I1213 10:40:29.546383  359214 logs.go:282] 0 containers: []
	W1213 10:40:29.546390  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:29.546398  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:29.546410  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:29.607363  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:29.607383  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:29.641550  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:29.641568  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:29.700639  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:29.700662  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:29.717135  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:29.717152  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:29.786035  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:29.777828   14992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:29.778659   14992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:29.780297   14992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:29.780629   14992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:29.782173   14992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:29.777828   14992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:29.778659   14992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:29.780297   14992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:29.780629   14992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:29.782173   14992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:32.286874  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:32.297433  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:32.297493  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:32.326086  359214 cri.go:89] found id: ""
	I1213 10:40:32.326102  359214 logs.go:282] 0 containers: []
	W1213 10:40:32.326109  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:32.326116  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:32.326172  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:32.359076  359214 cri.go:89] found id: ""
	I1213 10:40:32.359091  359214 logs.go:282] 0 containers: []
	W1213 10:40:32.359098  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:32.359104  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:32.359170  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:32.384522  359214 cri.go:89] found id: ""
	I1213 10:40:32.384536  359214 logs.go:282] 0 containers: []
	W1213 10:40:32.384544  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:32.384560  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:32.384659  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:32.410250  359214 cri.go:89] found id: ""
	I1213 10:40:32.410264  359214 logs.go:282] 0 containers: []
	W1213 10:40:32.410272  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:32.410285  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:32.410348  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:32.435630  359214 cri.go:89] found id: ""
	I1213 10:40:32.435644  359214 logs.go:282] 0 containers: []
	W1213 10:40:32.435651  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:32.435656  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:32.435714  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:32.463149  359214 cri.go:89] found id: ""
	I1213 10:40:32.463163  359214 logs.go:282] 0 containers: []
	W1213 10:40:32.463171  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:32.463176  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:32.463242  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:32.487678  359214 cri.go:89] found id: ""
	I1213 10:40:32.487692  359214 logs.go:282] 0 containers: []
	W1213 10:40:32.487700  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:32.487707  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:32.487716  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:32.550022  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:32.550044  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:32.583548  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:32.583564  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:32.640719  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:32.640741  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:32.658578  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:32.658596  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:32.723797  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:32.714586   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:32.715311   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:32.716834   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:32.717289   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:32.719662   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:32.714586   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:32.715311   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:32.716834   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:32.717289   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:32.719662   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:35.224914  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:35.236872  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:35.237012  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:35.268051  359214 cri.go:89] found id: ""
	I1213 10:40:35.268066  359214 logs.go:282] 0 containers: []
	W1213 10:40:35.268073  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:35.268080  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:35.268145  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:35.295044  359214 cri.go:89] found id: ""
	I1213 10:40:35.295059  359214 logs.go:282] 0 containers: []
	W1213 10:40:35.295068  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:35.295075  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:35.295135  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:35.325621  359214 cri.go:89] found id: ""
	I1213 10:40:35.325634  359214 logs.go:282] 0 containers: []
	W1213 10:40:35.325642  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:35.325647  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:35.325710  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:35.351145  359214 cri.go:89] found id: ""
	I1213 10:40:35.351160  359214 logs.go:282] 0 containers: []
	W1213 10:40:35.351168  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:35.351173  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:35.351232  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:35.376062  359214 cri.go:89] found id: ""
	I1213 10:40:35.376076  359214 logs.go:282] 0 containers: []
	W1213 10:40:35.376083  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:35.376089  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:35.376145  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:35.400598  359214 cri.go:89] found id: ""
	I1213 10:40:35.400612  359214 logs.go:282] 0 containers: []
	W1213 10:40:35.400619  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:35.400631  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:35.400688  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:35.425347  359214 cri.go:89] found id: ""
	I1213 10:40:35.425361  359214 logs.go:282] 0 containers: []
	W1213 10:40:35.425368  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:35.425376  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:35.425387  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:35.487139  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:35.487160  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:35.514527  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:35.514544  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:35.571469  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:35.571489  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:35.590017  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:35.590034  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:35.658284  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:35.648682   15200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:35.650020   15200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:35.650936   15200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:35.652639   15200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:35.653357   15200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:35.648682   15200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:35.650020   15200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:35.650936   15200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:35.652639   15200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:35.653357   15200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:38.158809  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:38.173580  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:38.173664  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:38.205099  359214 cri.go:89] found id: ""
	I1213 10:40:38.205115  359214 logs.go:282] 0 containers: []
	W1213 10:40:38.205122  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:38.205128  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:38.205185  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:38.230418  359214 cri.go:89] found id: ""
	I1213 10:40:38.230432  359214 logs.go:282] 0 containers: []
	W1213 10:40:38.230439  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:38.230445  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:38.230503  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:38.255657  359214 cri.go:89] found id: ""
	I1213 10:40:38.255671  359214 logs.go:282] 0 containers: []
	W1213 10:40:38.255679  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:38.255684  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:38.255743  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:38.284257  359214 cri.go:89] found id: ""
	I1213 10:40:38.284271  359214 logs.go:282] 0 containers: []
	W1213 10:40:38.284279  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:38.284285  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:38.284343  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:38.310187  359214 cri.go:89] found id: ""
	I1213 10:40:38.310202  359214 logs.go:282] 0 containers: []
	W1213 10:40:38.310209  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:38.310214  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:38.310272  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:38.334855  359214 cri.go:89] found id: ""
	I1213 10:40:38.334870  359214 logs.go:282] 0 containers: []
	W1213 10:40:38.334878  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:38.334883  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:38.334943  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:38.364073  359214 cri.go:89] found id: ""
	I1213 10:40:38.364087  359214 logs.go:282] 0 containers: []
	W1213 10:40:38.364095  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:38.364103  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:38.364114  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:38.380615  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:38.380633  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:38.445151  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:38.436629   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:38.437359   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:38.439007   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:38.439526   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:38.441205   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:38.436629   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:38.437359   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:38.439007   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:38.439526   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:38.441205   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:38.445161  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:38.445171  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:38.508000  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:38.508024  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:38.536010  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:38.536028  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
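
Each retry cycle above has the same shape: minikube walks the expected control-plane components (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) and asks crictl whether any container, running or exited, matches each name. A minimal Go sketch of that scan, illustrative only, run locally rather than over SSH and using only the crictl flags visible in the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    	}
    	for _, name := range components {
    		// minikube runs this over SSH inside the node; here it is
    		// executed locally purely for illustration.
    		out, err := exec.Command("sudo", "crictl", "ps", "-a",
    			"--quiet", "--name="+name).Output()
    		ids := strings.Fields(string(out))
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", name)
    			continue
    		}
    		fmt.Printf("%s: found %d container(s)\n", name, len(ids))
    	}
    }

In this run every component comes back empty, which is why each cycle falls through to the generic log gathering.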
	I1213 10:40:41.097145  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:41.107492  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:41.107560  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:41.133151  359214 cri.go:89] found id: ""
	I1213 10:40:41.133165  359214 logs.go:282] 0 containers: []
	W1213 10:40:41.133173  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:41.133178  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:41.133239  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:41.158807  359214 cri.go:89] found id: ""
	I1213 10:40:41.158822  359214 logs.go:282] 0 containers: []
	W1213 10:40:41.158830  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:41.158835  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:41.158900  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:41.186344  359214 cri.go:89] found id: ""
	I1213 10:40:41.186358  359214 logs.go:282] 0 containers: []
	W1213 10:40:41.186366  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:41.186371  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:41.186432  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:41.212889  359214 cri.go:89] found id: ""
	I1213 10:40:41.212904  359214 logs.go:282] 0 containers: []
	W1213 10:40:41.212911  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:41.212917  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:41.212976  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:41.238414  359214 cri.go:89] found id: ""
	I1213 10:40:41.238429  359214 logs.go:282] 0 containers: []
	W1213 10:40:41.238437  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:41.238442  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:41.238509  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:41.265200  359214 cri.go:89] found id: ""
	I1213 10:40:41.265215  359214 logs.go:282] 0 containers: []
	W1213 10:40:41.265222  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:41.265228  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:41.265299  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:41.293447  359214 cri.go:89] found id: ""
	I1213 10:40:41.293465  359214 logs.go:282] 0 containers: []
	W1213 10:40:41.293473  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:41.293483  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:41.293539  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:41.357277  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:41.348095   15390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:41.348933   15390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:41.350453   15390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:41.350904   15390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:41.352722   15390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:41.348095   15390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:41.348933   15390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:41.350453   15390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:41.350904   15390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:41.352722   15390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:41.357289  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:41.357299  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:41.419746  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:41.419767  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:41.447382  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:41.447400  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:41.502410  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:41.502430  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
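
The describe-nodes step fails for the same underlying reason the container scan comes back empty: kubectl inside the node dials localhost:8441 (the test's --apiserver-port) and nothing is listening there, so every API request ends in "connect: connection refused". A small probe that reproduces the check, assuming only the port taken from the log:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Port 8441 is the --apiserver-port used by this test profile.
    	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
    	if err != nil {
    		// With no kube-apiserver container running, this yields the
    		// same "connect: connection refused" seen in the kubectl errors.
    		fmt.Println("apiserver not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is open")
    }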
	I1213 10:40:44.019462  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:44.030131  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:44.030195  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:44.063076  359214 cri.go:89] found id: ""
	I1213 10:40:44.063093  359214 logs.go:282] 0 containers: []
	W1213 10:40:44.063102  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:44.063107  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:44.063171  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:44.087990  359214 cri.go:89] found id: ""
	I1213 10:40:44.088005  359214 logs.go:282] 0 containers: []
	W1213 10:40:44.088012  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:44.088017  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:44.088077  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:44.116967  359214 cri.go:89] found id: ""
	I1213 10:40:44.116982  359214 logs.go:282] 0 containers: []
	W1213 10:40:44.117000  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:44.117006  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:44.117075  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:44.144381  359214 cri.go:89] found id: ""
	I1213 10:40:44.144395  359214 logs.go:282] 0 containers: []
	W1213 10:40:44.144403  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:44.144414  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:44.144475  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:44.176265  359214 cri.go:89] found id: ""
	I1213 10:40:44.176279  359214 logs.go:282] 0 containers: []
	W1213 10:40:44.176286  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:44.176291  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:44.176349  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:44.204075  359214 cri.go:89] found id: ""
	I1213 10:40:44.204090  359214 logs.go:282] 0 containers: []
	W1213 10:40:44.204097  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:44.204102  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:44.204159  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:44.235147  359214 cri.go:89] found id: ""
	I1213 10:40:44.235161  359214 logs.go:282] 0 containers: []
	W1213 10:40:44.235169  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:44.235177  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:44.235187  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:44.290923  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:44.290942  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:44.307381  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:44.307398  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:44.371069  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:44.362628   15505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:44.363314   15505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:44.365045   15505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:44.365643   15505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:44.367260   15505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:44.362628   15505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:44.363314   15505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:44.365045   15505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:44.365643   15505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:44.367260   15505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:44.371080  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:44.371092  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:44.432736  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:44.432757  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
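
The {State:all Name:... Namespaces:[]} strings are the container-listing filter that cri.go formats before shelling out to crictl. An illustrative mirror of those fields (names taken from the log lines, not from minikube's actual source):

    package main

    import "fmt"

    // Illustrative mirror of the filter cri.go prints.
    type listFilter struct {
    	State      string   // "all" -> crictl ps -a (include exited containers)
    	Name       string   // -> crictl --name=<Name>
    	Namespaces []string // empty -> no pod-namespace filtering
    }

    func main() {
    	f := listFilter{State: "all", Name: "kube-apiserver"}
    	// Prints {State:all Name:kube-apiserver Namespaces:[]}, matching the log.
    	fmt.Printf("%+v\n", f)
    }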
	I1213 10:40:46.966048  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:46.976554  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:46.976616  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:47.009823  359214 cri.go:89] found id: ""
	I1213 10:40:47.009837  359214 logs.go:282] 0 containers: []
	W1213 10:40:47.009845  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:47.009850  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:47.009912  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:47.035213  359214 cri.go:89] found id: ""
	I1213 10:40:47.035227  359214 logs.go:282] 0 containers: []
	W1213 10:40:47.035234  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:47.035239  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:47.035300  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:47.060442  359214 cri.go:89] found id: ""
	I1213 10:40:47.060457  359214 logs.go:282] 0 containers: []
	W1213 10:40:47.060465  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:47.060470  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:47.060527  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:47.084361  359214 cri.go:89] found id: ""
	I1213 10:40:47.084375  359214 logs.go:282] 0 containers: []
	W1213 10:40:47.084383  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:47.084389  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:47.084453  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:47.109828  359214 cri.go:89] found id: ""
	I1213 10:40:47.109843  359214 logs.go:282] 0 containers: []
	W1213 10:40:47.109850  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:47.109856  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:47.109920  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:47.138538  359214 cri.go:89] found id: ""
	I1213 10:40:47.138553  359214 logs.go:282] 0 containers: []
	W1213 10:40:47.138561  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:47.138566  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:47.138623  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:47.173086  359214 cri.go:89] found id: ""
	I1213 10:40:47.173101  359214 logs.go:282] 0 containers: []
	W1213 10:40:47.173108  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:47.173116  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:47.173125  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:47.230267  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:47.230285  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:47.247567  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:47.247584  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:47.313118  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:47.305055   15610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:47.305868   15610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:47.307513   15610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:47.307952   15610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:47.309445   15610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:47.305055   15610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:47.305868   15610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:47.307513   15610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:47.307952   15610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:47.309445   15610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:47.313128  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:47.313140  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:47.379486  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:47.379507  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:49.911610  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:49.921678  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:49.921738  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:49.945802  359214 cri.go:89] found id: ""
	I1213 10:40:49.945815  359214 logs.go:282] 0 containers: []
	W1213 10:40:49.945823  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:49.945828  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:49.945884  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:49.972021  359214 cri.go:89] found id: ""
	I1213 10:40:49.972036  359214 logs.go:282] 0 containers: []
	W1213 10:40:49.972043  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:49.972048  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:49.972104  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:49.995832  359214 cri.go:89] found id: ""
	I1213 10:40:49.995847  359214 logs.go:282] 0 containers: []
	W1213 10:40:49.995854  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:49.995859  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:49.995917  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:50.025400  359214 cri.go:89] found id: ""
	I1213 10:40:50.025416  359214 logs.go:282] 0 containers: []
	W1213 10:40:50.025424  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:50.025430  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:50.025488  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:50.052197  359214 cri.go:89] found id: ""
	I1213 10:40:50.052213  359214 logs.go:282] 0 containers: []
	W1213 10:40:50.052222  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:50.052229  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:50.052290  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:50.079760  359214 cri.go:89] found id: ""
	I1213 10:40:50.079774  359214 logs.go:282] 0 containers: []
	W1213 10:40:50.079782  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:50.079788  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:50.079849  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:50.109349  359214 cri.go:89] found id: ""
	I1213 10:40:50.109364  359214 logs.go:282] 0 containers: []
	W1213 10:40:50.109372  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:50.109380  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:50.109390  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:50.165908  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:50.165929  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:50.184199  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:50.184216  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:50.252767  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:50.244722   15716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:50.245526   15716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:50.247105   15716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:50.247464   15716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:50.248997   15716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:50.244722   15716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:50.245526   15716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:50.247105   15716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:50.247464   15716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:50.248997   15716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:50.252777  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:50.252790  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:50.314222  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:50.314241  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:52.842532  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:52.853108  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:52.853184  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:52.880391  359214 cri.go:89] found id: ""
	I1213 10:40:52.880412  359214 logs.go:282] 0 containers: []
	W1213 10:40:52.880420  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:52.880426  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:52.880487  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:52.905175  359214 cri.go:89] found id: ""
	I1213 10:40:52.905189  359214 logs.go:282] 0 containers: []
	W1213 10:40:52.905197  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:52.905202  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:52.905279  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:52.934872  359214 cri.go:89] found id: ""
	I1213 10:40:52.934887  359214 logs.go:282] 0 containers: []
	W1213 10:40:52.934894  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:52.934900  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:52.934956  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:52.960307  359214 cri.go:89] found id: ""
	I1213 10:40:52.960321  359214 logs.go:282] 0 containers: []
	W1213 10:40:52.960329  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:52.960334  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:52.960390  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:52.985363  359214 cri.go:89] found id: ""
	I1213 10:40:52.985377  359214 logs.go:282] 0 containers: []
	W1213 10:40:52.985385  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:52.985390  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:52.985453  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:53.011565  359214 cri.go:89] found id: ""
	I1213 10:40:53.011581  359214 logs.go:282] 0 containers: []
	W1213 10:40:53.011589  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:53.011594  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:53.011657  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:53.036397  359214 cri.go:89] found id: ""
	I1213 10:40:53.036412  359214 logs.go:282] 0 containers: []
	W1213 10:40:53.036420  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:53.036428  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:53.036438  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:53.091583  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:53.091603  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:53.107990  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:53.108007  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:53.173876  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:53.164848   15816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:53.165601   15816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:53.167336   15816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:53.167976   15816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:53.169634   15816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:53.164848   15816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:53.165601   15816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:53.167336   15816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:53.167976   15816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:53.169634   15816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:53.173886  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:53.173897  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:53.238989  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:53.239009  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:55.773075  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:55.783512  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:55.783574  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:55.807988  359214 cri.go:89] found id: ""
	I1213 10:40:55.808002  359214 logs.go:282] 0 containers: []
	W1213 10:40:55.808009  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:55.808014  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:55.808073  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:55.831609  359214 cri.go:89] found id: ""
	I1213 10:40:55.831624  359214 logs.go:282] 0 containers: []
	W1213 10:40:55.831632  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:55.831637  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:55.831696  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:55.856162  359214 cri.go:89] found id: ""
	I1213 10:40:55.856177  359214 logs.go:282] 0 containers: []
	W1213 10:40:55.856184  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:55.856190  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:55.856247  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:55.883604  359214 cri.go:89] found id: ""
	I1213 10:40:55.883619  359214 logs.go:282] 0 containers: []
	W1213 10:40:55.883626  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:55.883631  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:55.883695  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:55.907679  359214 cri.go:89] found id: ""
	I1213 10:40:55.907694  359214 logs.go:282] 0 containers: []
	W1213 10:40:55.907701  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:55.907706  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:55.907764  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:55.932970  359214 cri.go:89] found id: ""
	I1213 10:40:55.932984  359214 logs.go:282] 0 containers: []
	W1213 10:40:55.932991  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:55.932996  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:55.933057  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:55.956837  359214 cri.go:89] found id: ""
	I1213 10:40:55.956851  359214 logs.go:282] 0 containers: []
	W1213 10:40:55.956858  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:55.956866  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:55.956877  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:56.030354  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:56.021163   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:56.021989   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:56.023979   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:56.024615   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:56.026271   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:56.021163   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:56.021989   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:56.023979   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:56.024615   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:56.026271   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:56.030364  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:56.030376  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:56.092205  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:56.092226  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:56.119616  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:56.119633  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:56.177084  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:56.177103  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
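
The timestamps (10:40:38, :41, :44, :47, :50, :53, :56, ...) show the outer wait loop ticking roughly every three seconds: each iteration starts with pgrep -xnf kube-apiserver.*minikube.* and, when that finds no process, falls through to the container scan and log gathering above. A hedged sketch of such a poll loop, with an illustrative two-minute deadline:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 only if a matching process exists, so Run()
    		// returning nil means the apiserver process is up.
    		if exec.Command("sudo", "pgrep", "-xnf",
    			"kube-apiserver.*minikube.*").Run() == nil {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    		time.Sleep(3 * time.Second) // matches the cadence in the timestamps
    	}
    	fmt.Println("timed out waiting for kube-apiserver")
    }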
	I1213 10:40:58.695794  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:58.706025  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:58.706086  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:58.729634  359214 cri.go:89] found id: ""
	I1213 10:40:58.729647  359214 logs.go:282] 0 containers: []
	W1213 10:40:58.729654  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:58.729659  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:58.729718  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:58.753786  359214 cri.go:89] found id: ""
	I1213 10:40:58.753800  359214 logs.go:282] 0 containers: []
	W1213 10:40:58.753808  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:58.753813  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:58.753874  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:58.778478  359214 cri.go:89] found id: ""
	I1213 10:40:58.778491  359214 logs.go:282] 0 containers: []
	W1213 10:40:58.778498  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:58.778503  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:58.778560  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:58.803243  359214 cri.go:89] found id: ""
	I1213 10:40:58.803258  359214 logs.go:282] 0 containers: []
	W1213 10:40:58.803274  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:58.803280  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:58.803342  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:58.827435  359214 cri.go:89] found id: ""
	I1213 10:40:58.827449  359214 logs.go:282] 0 containers: []
	W1213 10:40:58.827457  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:58.827462  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:58.827526  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:58.852612  359214 cri.go:89] found id: ""
	I1213 10:40:58.852627  359214 logs.go:282] 0 containers: []
	W1213 10:40:58.852635  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:58.852640  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:58.852702  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:58.879181  359214 cri.go:89] found id: ""
	I1213 10:40:58.879195  359214 logs.go:282] 0 containers: []
	W1213 10:40:58.879202  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:58.879210  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:58.879224  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:58.940146  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:58.940166  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:58.969086  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:58.969104  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:59.027812  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:59.027832  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:59.044161  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:59.044180  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:59.107958  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:59.099940   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:59.100731   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:59.102281   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:59.102588   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:59.104070   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:59.099940   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:59.100731   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:59.102281   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:59.102588   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:59.104070   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
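
With no component containers to inspect, the fallback evidence in each cycle comes from four fixed sources: kubelet and containerd units via journalctl, kernel warnings via dmesg, and a container listing that prefers crictl but falls back to docker. A sketch that runs the same four commands locally, copied verbatim from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Commands copied from the log; each is one bash -c invocation.
    	sources := []struct{ name, cmd string }{
    		{"kubelet", "sudo journalctl -u kubelet -n 400"},
    		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
    		{"containerd", "sudo journalctl -u containerd -n 400"},
    		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    	}
    	for _, s := range sources {
    		fmt.Println("Gathering logs for", s.name, "...")
    		out, _ := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
    		fmt.Printf("%s", out)
    	}
    }

The `which crictl || echo crictl` idiom keeps the command well-formed even when crictl is missing from PATH, so the trailing || sudo docker ps -a branch can still take over.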
	I1213 10:41:01.608222  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:01.619072  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:01.619137  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:01.644559  359214 cri.go:89] found id: ""
	I1213 10:41:01.644574  359214 logs.go:282] 0 containers: []
	W1213 10:41:01.644582  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:01.644587  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:01.644690  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:01.673686  359214 cri.go:89] found id: ""
	I1213 10:41:01.673701  359214 logs.go:282] 0 containers: []
	W1213 10:41:01.673709  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:01.673714  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:01.673776  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:01.700231  359214 cri.go:89] found id: ""
	I1213 10:41:01.700246  359214 logs.go:282] 0 containers: []
	W1213 10:41:01.700253  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:01.700259  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:01.700317  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:01.729867  359214 cri.go:89] found id: ""
	I1213 10:41:01.729883  359214 logs.go:282] 0 containers: []
	W1213 10:41:01.729890  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:01.729895  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:01.729954  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:01.754275  359214 cri.go:89] found id: ""
	I1213 10:41:01.754289  359214 logs.go:282] 0 containers: []
	W1213 10:41:01.754297  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:01.754302  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:01.754362  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:01.780449  359214 cri.go:89] found id: ""
	I1213 10:41:01.780464  359214 logs.go:282] 0 containers: []
	W1213 10:41:01.780472  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:01.780477  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:01.780533  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:01.806614  359214 cri.go:89] found id: ""
	I1213 10:41:01.806638  359214 logs.go:282] 0 containers: []
	W1213 10:41:01.806646  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:01.806654  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:01.806666  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:01.872660  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:01.872681  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:41:01.908081  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:01.908099  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:01.965082  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:01.965103  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:01.982015  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:01.982033  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:02.054794  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:02.045518   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:02.046349   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:02.047002   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:02.048599   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:02.049133   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
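The cycle above runs the same probe for every control-plane component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet): `sudo crictl ps -a --quiet --name=<component>` prints matching container IDs one per line, and empty output is what produces the `found id: ""` / `0 containers` / `No container was found matching ...` trio. A minimal, hypothetical Go sketch of that step (an illustration of the pattern, not minikube's actual cri.go):

```go
// Sketch: list container IDs via crictl and treat empty output as
// "no container found", mirroring the log lines above. Assumes sudo
// and crictl are available on PATH; component names come from the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the IDs crictl prints for containers whose
// name matches the given component.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // empty output -> empty slice
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
	for _, c := range components {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("listing %q failed: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
		}
	}
}
```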
	I1213 10:41:04.555147  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:04.565791  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:04.565856  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:04.591956  359214 cri.go:89] found id: ""
	I1213 10:41:04.591971  359214 logs.go:282] 0 containers: []
	W1213 10:41:04.591978  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:04.591984  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:04.592045  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:04.615698  359214 cri.go:89] found id: ""
	I1213 10:41:04.615713  359214 logs.go:282] 0 containers: []
	W1213 10:41:04.615720  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:04.615725  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:04.615786  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:04.640509  359214 cri.go:89] found id: ""
	I1213 10:41:04.640523  359214 logs.go:282] 0 containers: []
	W1213 10:41:04.640531  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:04.640538  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:04.640596  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:04.665547  359214 cri.go:89] found id: ""
	I1213 10:41:04.665562  359214 logs.go:282] 0 containers: []
	W1213 10:41:04.665569  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:04.665577  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:04.665637  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:04.690947  359214 cri.go:89] found id: ""
	I1213 10:41:04.690961  359214 logs.go:282] 0 containers: []
	W1213 10:41:04.690969  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:04.690974  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:04.691037  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:04.720397  359214 cri.go:89] found id: ""
	I1213 10:41:04.720421  359214 logs.go:282] 0 containers: []
	W1213 10:41:04.720429  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:04.720435  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:04.720492  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:04.750207  359214 cri.go:89] found id: ""
	I1213 10:41:04.750233  359214 logs.go:282] 0 containers: []
	W1213 10:41:04.750241  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:04.750250  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:04.750261  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:04.814350  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:04.806033   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:04.806630   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:04.808181   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:04.808726   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:04.810316   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:41:04.814360  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:04.814381  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:04.876775  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:04.876798  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:41:04.904820  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:04.904836  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:04.962939  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:04.962958  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
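Each "Gathering logs for ..." step shells out to a fixed pipeline on the node: journalctl for the kubelet and containerd units, a filtered dmesg, and a container listing that falls back from crictl to docker. A hypothetical Go runner for those exact command strings (the commands are copied from the log above; the runner itself is illustrative):

```go
// Sketch: run each log-gathering pipeline from the log above through
// /bin/bash -c, as ssh_runner does remotely. Assumes it runs on the
// node itself rather than over SSH.
package main

import (
	"fmt"
	"os/exec"
)

var logSources = map[string]string{
	"kubelet":    "sudo journalctl -u kubelet -n 400",
	"containerd": "sudo journalctl -u containerd -n 400",
	"dmesg":      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	// Falls back to a bare `crictl`, then to docker, if `which crictl` finds nothing.
	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
}

func main() {
	for name, cmd := range logSources {
		fmt.Println("Gathering logs for", name, "...")
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %q failed: %v\n", name, err)
		}
		fmt.Print(string(out))
	}
}
```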
	I1213 10:41:07.479750  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:07.489681  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:07.489740  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:07.516670  359214 cri.go:89] found id: ""
	I1213 10:41:07.516684  359214 logs.go:282] 0 containers: []
	W1213 10:41:07.516691  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:07.516697  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:07.516754  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:07.541873  359214 cri.go:89] found id: ""
	I1213 10:41:07.541888  359214 logs.go:282] 0 containers: []
	W1213 10:41:07.541895  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:07.541900  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:07.541958  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:07.567390  359214 cri.go:89] found id: ""
	I1213 10:41:07.567404  359214 logs.go:282] 0 containers: []
	W1213 10:41:07.567411  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:07.567416  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:07.567476  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:07.595533  359214 cri.go:89] found id: ""
	I1213 10:41:07.595546  359214 logs.go:282] 0 containers: []
	W1213 10:41:07.595553  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:07.595559  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:07.595624  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:07.619449  359214 cri.go:89] found id: ""
	I1213 10:41:07.619463  359214 logs.go:282] 0 containers: []
	W1213 10:41:07.619470  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:07.619476  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:07.619535  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:07.646270  359214 cri.go:89] found id: ""
	I1213 10:41:07.646284  359214 logs.go:282] 0 containers: []
	W1213 10:41:07.646291  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:07.646297  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:07.646356  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:07.671609  359214 cri.go:89] found id: ""
	I1213 10:41:07.671623  359214 logs.go:282] 0 containers: []
	W1213 10:41:07.671630  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:07.671638  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:07.671648  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:07.726992  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:07.727010  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:07.743360  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:07.743377  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:07.805371  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:07.797570   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:07.797988   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:07.799538   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:07.799877   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:07.801379   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:41:07.805381  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:07.805393  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:07.867093  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:07.867115  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
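The repeated `dial tcp [::1]:8441: connect: connection refused` from kubectl means nothing is listening on the configured apiserver port at all, which is consistent with crictl finding no kube-apiserver container in the cycles above. A trivial probe that reproduces the same failure mode (port 8441 and the 2-second timeout are this test's values, not general defaults):

```go
// Sketch: probe the apiserver port the way kubectl's dial does.
// While no kube-apiserver container exists, this prints
// "connection refused", matching the stderr in the log.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err) // expect: connection refused
		return
	}
	conn.Close()
	fmt.Println("something is listening on :8441")
}
```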
	I1213 10:41:10.399083  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:10.409097  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:10.409158  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:10.444135  359214 cri.go:89] found id: ""
	I1213 10:41:10.444149  359214 logs.go:282] 0 containers: []
	W1213 10:41:10.444157  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:10.444162  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:10.444224  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:10.476756  359214 cri.go:89] found id: ""
	I1213 10:41:10.476771  359214 logs.go:282] 0 containers: []
	W1213 10:41:10.476778  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:10.476784  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:10.476842  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:10.501876  359214 cri.go:89] found id: ""
	I1213 10:41:10.501890  359214 logs.go:282] 0 containers: []
	W1213 10:41:10.501898  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:10.501903  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:10.501962  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:10.526921  359214 cri.go:89] found id: ""
	I1213 10:41:10.526936  359214 logs.go:282] 0 containers: []
	W1213 10:41:10.526943  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:10.526949  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:10.527008  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:10.560474  359214 cri.go:89] found id: ""
	I1213 10:41:10.560489  359214 logs.go:282] 0 containers: []
	W1213 10:41:10.560496  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:10.560501  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:10.560560  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:10.589176  359214 cri.go:89] found id: ""
	I1213 10:41:10.589190  359214 logs.go:282] 0 containers: []
	W1213 10:41:10.589209  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:10.589215  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:10.589301  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:10.614119  359214 cri.go:89] found id: ""
	I1213 10:41:10.614139  359214 logs.go:282] 0 containers: []
	W1213 10:41:10.614146  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:10.614155  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:10.614165  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:10.669835  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:10.669856  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:10.687547  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:10.687564  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:10.753151  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:10.744373   16446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:10.744993   16446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:10.747393   16446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:10.747860   16446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:10.749416   16446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:41:10.753161  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:10.753175  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:10.825142  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:10.825173  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:41:13.352978  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:13.363579  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:13.363649  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:13.392544  359214 cri.go:89] found id: ""
	I1213 10:41:13.392558  359214 logs.go:282] 0 containers: []
	W1213 10:41:13.392565  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:13.392571  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:13.392668  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:13.431393  359214 cri.go:89] found id: ""
	I1213 10:41:13.431407  359214 logs.go:282] 0 containers: []
	W1213 10:41:13.431424  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:13.431430  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:13.431498  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:13.467012  359214 cri.go:89] found id: ""
	I1213 10:41:13.467027  359214 logs.go:282] 0 containers: []
	W1213 10:41:13.467034  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:13.467040  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:13.467114  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:13.495958  359214 cri.go:89] found id: ""
	I1213 10:41:13.495972  359214 logs.go:282] 0 containers: []
	W1213 10:41:13.495990  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:13.495996  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:13.496061  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:13.521376  359214 cri.go:89] found id: ""
	I1213 10:41:13.521399  359214 logs.go:282] 0 containers: []
	W1213 10:41:13.521408  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:13.521413  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:13.521480  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:13.548831  359214 cri.go:89] found id: ""
	I1213 10:41:13.548845  359214 logs.go:282] 0 containers: []
	W1213 10:41:13.548852  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:13.548858  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:13.548920  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:13.574611  359214 cri.go:89] found id: ""
	I1213 10:41:13.574626  359214 logs.go:282] 0 containers: []
	W1213 10:41:13.574633  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:13.574661  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:13.574673  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:13.631156  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:13.631175  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:13.647668  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:13.647685  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:13.712729  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:13.703922   16550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:13.704556   16550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:13.706153   16550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:13.706621   16550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:13.708279   16550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:41:13.712740  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:13.712752  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:13.776779  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:13.776799  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
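Every cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`: `-f` matches against the full command line, `-x` requires the whole line to match the pattern, and `-n` keeps only the newest PID. pgrep exits with status 1 when nothing matches, which is what keeps this wait loop spinning. A hypothetical Go version of that check (illustrative only):

```go
// Sketch: distinguish "no matching process" (pgrep exit status 1)
// from other failures when looking for the apiserver process.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func apiserverPID() (string, bool) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return "", false // exit 1: no matching process
	}
	return strings.TrimSpace(string(out)), err == nil
}

func main() {
	if pid, ok := apiserverPID(); ok {
		fmt.Println("kube-apiserver pid:", pid)
	} else {
		fmt.Println("kube-apiserver process not found")
	}
}
```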
	I1213 10:41:16.310332  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:16.320699  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:16.320761  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:16.344441  359214 cri.go:89] found id: ""
	I1213 10:41:16.344455  359214 logs.go:282] 0 containers: []
	W1213 10:41:16.344462  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:16.344468  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:16.344529  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:16.372703  359214 cri.go:89] found id: ""
	I1213 10:41:16.372717  359214 logs.go:282] 0 containers: []
	W1213 10:41:16.372725  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:16.372730  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:16.372789  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:16.397701  359214 cri.go:89] found id: ""
	I1213 10:41:16.397715  359214 logs.go:282] 0 containers: []
	W1213 10:41:16.397723  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:16.397728  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:16.397785  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:16.436711  359214 cri.go:89] found id: ""
	I1213 10:41:16.436726  359214 logs.go:282] 0 containers: []
	W1213 10:41:16.436733  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:16.436739  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:16.436795  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:16.471220  359214 cri.go:89] found id: ""
	I1213 10:41:16.471235  359214 logs.go:282] 0 containers: []
	W1213 10:41:16.471243  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:16.471248  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:16.471306  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:16.498773  359214 cri.go:89] found id: ""
	I1213 10:41:16.498788  359214 logs.go:282] 0 containers: []
	W1213 10:41:16.498796  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:16.498801  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:16.498861  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:16.523734  359214 cri.go:89] found id: ""
	I1213 10:41:16.523749  359214 logs.go:282] 0 containers: []
	W1213 10:41:16.523756  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:16.523764  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:16.523775  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:41:16.554346  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:16.554364  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:16.610645  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:16.610665  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:16.626953  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:16.626970  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:16.691344  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:16.682639   16666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:16.683311   16666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:16.685086   16666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:16.685793   16666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:16.687420   16666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:41:16.691354  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:16.691367  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:19.255129  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:19.265879  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:19.265940  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:19.291837  359214 cri.go:89] found id: ""
	I1213 10:41:19.291851  359214 logs.go:282] 0 containers: []
	W1213 10:41:19.291859  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:19.291864  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:19.291923  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:19.315964  359214 cri.go:89] found id: ""
	I1213 10:41:19.315978  359214 logs.go:282] 0 containers: []
	W1213 10:41:19.315985  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:19.315990  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:19.316046  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:19.343352  359214 cri.go:89] found id: ""
	I1213 10:41:19.343366  359214 logs.go:282] 0 containers: []
	W1213 10:41:19.343373  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:19.343378  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:19.343434  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:19.367745  359214 cri.go:89] found id: ""
	I1213 10:41:19.367760  359214 logs.go:282] 0 containers: []
	W1213 10:41:19.367767  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:19.367773  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:19.367830  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:19.391416  359214 cri.go:89] found id: ""
	I1213 10:41:19.391429  359214 logs.go:282] 0 containers: []
	W1213 10:41:19.391437  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:19.391442  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:19.391503  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:19.420969  359214 cri.go:89] found id: ""
	I1213 10:41:19.420982  359214 logs.go:282] 0 containers: []
	W1213 10:41:19.420989  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:19.420995  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:19.421051  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:19.459512  359214 cri.go:89] found id: ""
	I1213 10:41:19.459528  359214 logs.go:282] 0 containers: []
	W1213 10:41:19.459536  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:19.459544  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:19.459555  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:41:19.490208  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:19.490224  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:19.546240  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:19.546261  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:19.562645  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:19.562664  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:19.625588  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:19.617541   16770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:19.617927   16770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:19.619446   16770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:19.619795   16770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:19.621463   16770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:41:19.625599  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:19.625610  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
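The timestamps advance in roughly three-second steps (10:41:01, :04, :07, :10, :13, :16, :19, ...): the same pgrep-then-crictl-then-dump-logs check is being retried on a fixed interval until the apiserver appears or the test's deadline expires. A bare-bones version of that loop, with the interval inferred from the timestamps and a placeholder deadline (neither is taken from minikube source):

```go
// Sketch: poll a readiness check every 3s until a deadline, as the
// timestamp cadence in the log suggests. apiserverUp stands in for
// the pgrep/crictl checks above and always fails, as in this run.
package main

import (
	"fmt"
	"time"
)

func apiserverUp() bool { return false } // placeholder check

func main() {
	deadline := time.Now().Add(30 * time.Second) // assumed, for illustration
	for time.Now().Before(deadline) {
		if apiserverUp() {
			fmt.Println("apiserver is up")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for apiserver")
}
```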
	I1213 10:41:22.187966  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:22.198583  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:22.198650  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:22.223213  359214 cri.go:89] found id: ""
	I1213 10:41:22.223227  359214 logs.go:282] 0 containers: []
	W1213 10:41:22.223240  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:22.223246  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:22.223303  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:22.248552  359214 cri.go:89] found id: ""
	I1213 10:41:22.248567  359214 logs.go:282] 0 containers: []
	W1213 10:41:22.248574  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:22.248579  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:22.248641  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:22.273682  359214 cri.go:89] found id: ""
	I1213 10:41:22.273697  359214 logs.go:282] 0 containers: []
	W1213 10:41:22.273714  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:22.273720  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:22.273802  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:22.299868  359214 cri.go:89] found id: ""
	I1213 10:41:22.299883  359214 logs.go:282] 0 containers: []
	W1213 10:41:22.299891  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:22.299896  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:22.299962  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:22.325309  359214 cri.go:89] found id: ""
	I1213 10:41:22.325324  359214 logs.go:282] 0 containers: []
	W1213 10:41:22.325331  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:22.325337  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:22.325399  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:22.354179  359214 cri.go:89] found id: ""
	I1213 10:41:22.354193  359214 logs.go:282] 0 containers: []
	W1213 10:41:22.354200  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:22.354205  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:22.354261  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:22.378958  359214 cri.go:89] found id: ""
	I1213 10:41:22.378980  359214 logs.go:282] 0 containers: []
	W1213 10:41:22.378987  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:22.378997  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:22.379007  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:22.440927  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:22.440949  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:22.460102  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:22.460120  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:22.529575  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:22.521290   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:22.521799   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:22.523477   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:22.524007   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:22.525558   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:41:22.529585  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:22.529595  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:22.592904  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:22.592925  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:41:25.122090  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:25.132657  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:25.132721  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:25.159021  359214 cri.go:89] found id: ""
	I1213 10:41:25.159036  359214 logs.go:282] 0 containers: []
	W1213 10:41:25.159044  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:25.159049  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:25.159111  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:25.185666  359214 cri.go:89] found id: ""
	I1213 10:41:25.185691  359214 logs.go:282] 0 containers: []
	W1213 10:41:25.185700  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:25.185706  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:25.185787  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:25.211201  359214 cri.go:89] found id: ""
	I1213 10:41:25.211216  359214 logs.go:282] 0 containers: []
	W1213 10:41:25.211223  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:25.211228  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:25.211288  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:25.241164  359214 cri.go:89] found id: ""
	I1213 10:41:25.241178  359214 logs.go:282] 0 containers: []
	W1213 10:41:25.241185  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:25.241191  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:25.241259  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:25.266721  359214 cri.go:89] found id: ""
	I1213 10:41:25.266737  359214 logs.go:282] 0 containers: []
	W1213 10:41:25.266745  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:25.266751  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:25.266815  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:25.292241  359214 cri.go:89] found id: ""
	I1213 10:41:25.292255  359214 logs.go:282] 0 containers: []
	W1213 10:41:25.292263  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:25.292272  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:25.292332  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:25.317411  359214 cri.go:89] found id: ""
	I1213 10:41:25.317441  359214 logs.go:282] 0 containers: []
	W1213 10:41:25.317450  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:25.317458  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:25.317469  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:25.373328  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:25.373348  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:25.390032  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:25.390057  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:25.483290  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:25.471186   16962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:25.471638   16962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:25.473963   16962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:25.474270   16962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:25.475696   16962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:41:25.483300  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:25.483311  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:25.544908  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:25.544930  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
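	(Annotation: the "container status" probe above chains a crictl lookup with a docker fallback. A manual equivalent for reproducing it against this node is sketched below; <profile> is a placeholder for the profile under test, not a value taken from this log.)

	    # Prefer crictl if installed, otherwise fall back to docker, exactly as the logged command does.
	    minikube ssh -p <profile> -- 'sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a'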
	I1213 10:41:28.078163  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:28.091034  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:28.091099  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:28.115911  359214 cri.go:89] found id: ""
	I1213 10:41:28.115925  359214 logs.go:282] 0 containers: []
	W1213 10:41:28.115934  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:28.115940  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:28.116004  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:28.139316  359214 cri.go:89] found id: ""
	I1213 10:41:28.139330  359214 logs.go:282] 0 containers: []
	W1213 10:41:28.139338  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:28.139343  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:28.139399  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:28.164405  359214 cri.go:89] found id: ""
	I1213 10:41:28.164420  359214 logs.go:282] 0 containers: []
	W1213 10:41:28.164427  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:28.164434  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:28.164494  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:28.193103  359214 cri.go:89] found id: ""
	I1213 10:41:28.193117  359214 logs.go:282] 0 containers: []
	W1213 10:41:28.193130  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:28.193136  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:28.193191  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:28.218193  359214 cri.go:89] found id: ""
	I1213 10:41:28.218207  359214 logs.go:282] 0 containers: []
	W1213 10:41:28.218214  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:28.218219  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:28.218277  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:28.246727  359214 cri.go:89] found id: ""
	I1213 10:41:28.246741  359214 logs.go:282] 0 containers: []
	W1213 10:41:28.246748  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:28.246754  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:28.246828  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:28.272720  359214 cri.go:89] found id: ""
	I1213 10:41:28.272735  359214 logs.go:282] 0 containers: []
	W1213 10:41:28.272753  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:28.272761  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:28.272771  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:28.329731  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:28.329751  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:28.345935  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:28.345953  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:28.409004  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:28.400511   17066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:28.401329   17066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:28.403117   17066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:28.403657   17066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:28.404653   17066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:41:28.400511   17066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:28.401329   17066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:28.403117   17066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:28.403657   17066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:28.404653   17066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:41:28.409014  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:28.409024  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:28.475582  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:28.475603  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:41:31.008193  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:31.019100  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:31.019165  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:31.043886  359214 cri.go:89] found id: ""
	I1213 10:41:31.043907  359214 logs.go:282] 0 containers: []
	W1213 10:41:31.043915  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:31.043921  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:31.043987  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:31.069993  359214 cri.go:89] found id: ""
	I1213 10:41:31.070008  359214 logs.go:282] 0 containers: []
	W1213 10:41:31.070016  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:31.070022  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:31.070089  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:31.098048  359214 cri.go:89] found id: ""
	I1213 10:41:31.098075  359214 logs.go:282] 0 containers: []
	W1213 10:41:31.098083  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:31.098089  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:31.098161  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:31.123592  359214 cri.go:89] found id: ""
	I1213 10:41:31.123608  359214 logs.go:282] 0 containers: []
	W1213 10:41:31.123616  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:31.123621  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:31.123686  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:31.151147  359214 cri.go:89] found id: ""
	I1213 10:41:31.151163  359214 logs.go:282] 0 containers: []
	W1213 10:41:31.151171  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:31.151177  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:31.151244  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:31.181236  359214 cri.go:89] found id: ""
	I1213 10:41:31.181257  359214 logs.go:282] 0 containers: []
	W1213 10:41:31.181265  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:31.181270  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:31.181332  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:31.210269  359214 cri.go:89] found id: ""
	I1213 10:41:31.210283  359214 logs.go:282] 0 containers: []
	W1213 10:41:31.210303  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:31.210311  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:31.210325  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:31.227244  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:31.227261  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:31.293720  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:31.285094   17168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:31.285961   17168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:31.287612   17168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:31.287962   17168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:31.289354   17168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:41:31.285094   17168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:31.285961   17168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:31.287612   17168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:31.287962   17168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:31.289354   17168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:41:31.293731  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:31.293745  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:31.357626  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:31.357648  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:41:31.386271  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:31.386288  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:33.948226  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:33.958367  359214 kubeadm.go:602] duration metric: took 4m4.333187147s to restartPrimaryControlPlane
	W1213 10:41:33.958431  359214 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1213 10:41:33.958502  359214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 10:41:34.375262  359214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:41:34.388893  359214 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:41:34.396960  359214 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:41:34.397012  359214 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:41:34.404696  359214 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:41:34.404706  359214 kubeadm.go:158] found existing configuration files:
	
	I1213 10:41:34.404755  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:41:34.412350  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:41:34.412405  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:41:34.419971  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:41:34.427828  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:41:34.427887  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:41:34.435644  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:41:34.443354  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:41:34.443408  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:41:34.451024  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:41:34.458860  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:41:34.458918  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
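	(Annotation: the four grep/rm pairs above are stale-kubeconfig cleanup: any /etc/kubernetes/*.conf that does not reference the expected control-plane endpoint is removed before kubeadm init runs. A compact sketch of the same logic follows; the loop form is an assumption, while the endpoint and paths are taken from the log.)

	    # Remove kubeconfigs that do not mention the expected control-plane endpoint.
	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q 'https://control-plane.minikube.internal:8441' "/etc/kubernetes/${f}.conf" \
	        || sudo rm -f "/etc/kubernetes/${f}.conf"
	    done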
	I1213 10:41:34.466249  359214 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:41:34.504797  359214 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:41:34.504845  359214 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:41:34.587434  359214 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:41:34.587499  359214 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:41:34.587534  359214 kubeadm.go:319] OS: Linux
	I1213 10:41:34.587577  359214 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:41:34.587624  359214 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:41:34.587670  359214 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:41:34.587717  359214 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:41:34.587764  359214 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:41:34.587816  359214 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:41:34.587860  359214 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:41:34.587906  359214 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:41:34.587951  359214 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:41:34.656000  359214 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:41:34.656112  359214 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:41:34.656196  359214 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:41:34.661831  359214 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:41:34.665544  359214 out.go:252]   - Generating certificates and keys ...
	I1213 10:41:34.665620  359214 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:41:34.665681  359214 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:41:34.665752  359214 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:41:34.665808  359214 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:41:34.665873  359214 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:41:34.665922  359214 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:41:34.665981  359214 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:41:34.666037  359214 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:41:34.666107  359214 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:41:34.666174  359214 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:41:34.666208  359214 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:41:34.666259  359214 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:41:35.121283  359214 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:41:35.663053  359214 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:41:35.746928  359214 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:41:35.962879  359214 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:41:36.165716  359214 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:41:36.166361  359214 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:41:36.169355  359214 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:41:36.172503  359214 out.go:252]   - Booting up control plane ...
	I1213 10:41:36.172623  359214 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:41:36.172875  359214 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:41:36.174488  359214 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:41:36.195010  359214 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:41:36.195108  359214 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:41:36.203505  359214 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:41:36.203828  359214 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:41:36.204072  359214 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:41:36.339853  359214 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:41:36.339968  359214 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:45:36.340589  359214 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00099636s
	I1213 10:45:36.340614  359214 kubeadm.go:319] 
	I1213 10:45:36.340667  359214 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:45:36.340697  359214 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:45:36.340795  359214 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:45:36.340800  359214 kubeadm.go:319] 
	I1213 10:45:36.340897  359214 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:45:36.340926  359214 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:45:36.340953  359214 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:45:36.340956  359214 kubeadm.go:319] 
	I1213 10:45:36.344674  359214 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:45:36.345121  359214 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:45:36.345236  359214 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:45:36.345471  359214 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 10:45:36.345476  359214 kubeadm.go:319] 
	I1213 10:45:36.345548  359214 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1213 10:45:36.345669  359214 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00099636s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
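	(Annotation: kubeadm's suggestions above can be run inside the minikube node to see why the kubelet never answered its healthz probe. A sketch follows; <profile> is a placeholder, and the probe URL is the one the kubelet-check phase reports.)

	    # Service state and recent kubelet journal entries:
	    minikube ssh -p <profile> -- 'systemctl status kubelet --no-pager'
	    minikube ssh -p <profile> -- 'sudo journalctl -xeu kubelet --no-pager | tail -n 50'
	    # The same health endpoint kubeadm polls for up to 4m0s:
	    minikube ssh -p <profile> -- 'curl -sSL http://127.0.0.1:10248/healthz'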
	
	I1213 10:45:36.345754  359214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 10:45:36.752142  359214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:45:36.765694  359214 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:45:36.765753  359214 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:45:36.773442  359214 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:45:36.773451  359214 kubeadm.go:158] found existing configuration files:
	
	I1213 10:45:36.773504  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:45:36.781648  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:45:36.781706  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:45:36.789406  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:45:36.797582  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:45:36.797641  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:45:36.805463  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:45:36.813325  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:45:36.813378  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:45:36.820926  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:45:36.828930  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:45:36.828988  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:45:36.836622  359214 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:45:36.877023  359214 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:45:36.877075  359214 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:45:36.946303  359214 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:45:36.946364  359214 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:45:36.946398  359214 kubeadm.go:319] OS: Linux
	I1213 10:45:36.946444  359214 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:45:36.946489  359214 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:45:36.946532  359214 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:45:36.946576  359214 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:45:36.946620  359214 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:45:36.946665  359214 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:45:36.946727  359214 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:45:36.946771  359214 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:45:36.946813  359214 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:45:37.023251  359214 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:45:37.023367  359214 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:45:37.023453  359214 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:45:37.035188  359214 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:45:37.040505  359214 out.go:252]   - Generating certificates and keys ...
	I1213 10:45:37.040588  359214 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:45:37.040657  359214 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:45:37.040732  359214 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:45:37.040792  359214 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:45:37.040860  359214 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:45:37.040912  359214 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:45:37.040974  359214 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:45:37.041034  359214 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:45:37.041112  359214 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:45:37.041183  359214 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:45:37.041219  359214 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:45:37.041274  359214 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:45:37.085508  359214 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:45:37.524146  359214 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:45:37.643175  359214 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:45:38.077377  359214 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:45:38.482147  359214 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:45:38.482682  359214 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:45:38.485202  359214 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:45:38.490562  359214 out.go:252]   - Booting up control plane ...
	I1213 10:45:38.490673  359214 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:45:38.490778  359214 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:45:38.490854  359214 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:45:38.510040  359214 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:45:38.510136  359214 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:45:38.518983  359214 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:45:38.519096  359214 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:45:38.519153  359214 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:45:38.652209  359214 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:45:38.652350  359214 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:49:38.651567  359214 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001187482s
	I1213 10:49:38.651592  359214 kubeadm.go:319] 
	I1213 10:49:38.651654  359214 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:49:38.651686  359214 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:49:38.651792  359214 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:49:38.651797  359214 kubeadm.go:319] 
	I1213 10:49:38.651939  359214 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:49:38.651995  359214 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:49:38.652034  359214 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:49:38.652037  359214 kubeadm.go:319] 
	I1213 10:49:38.656860  359214 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:49:38.657251  359214 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:49:38.657352  359214 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:49:38.657572  359214 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 10:49:38.657576  359214 kubeadm.go:319] 
	I1213 10:49:38.657639  359214 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 10:49:38.657718  359214 kubeadm.go:403] duration metric: took 12m9.068082439s to StartCluster
	I1213 10:49:38.657750  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:49:38.657821  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:49:38.689768  359214 cri.go:89] found id: ""
	I1213 10:49:38.689783  359214 logs.go:282] 0 containers: []
	W1213 10:49:38.689798  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:49:38.689803  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:49:38.689865  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:49:38.719427  359214 cri.go:89] found id: ""
	I1213 10:49:38.719441  359214 logs.go:282] 0 containers: []
	W1213 10:49:38.719449  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:49:38.719455  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:49:38.719513  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:49:38.747452  359214 cri.go:89] found id: ""
	I1213 10:49:38.747466  359214 logs.go:282] 0 containers: []
	W1213 10:49:38.747474  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:49:38.747480  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:49:38.747544  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:49:38.772270  359214 cri.go:89] found id: ""
	I1213 10:49:38.772286  359214 logs.go:282] 0 containers: []
	W1213 10:49:38.772293  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:49:38.772298  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:49:38.772358  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:49:38.796548  359214 cri.go:89] found id: ""
	I1213 10:49:38.796562  359214 logs.go:282] 0 containers: []
	W1213 10:49:38.796570  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:49:38.796575  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:49:38.796633  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:49:38.825383  359214 cri.go:89] found id: ""
	I1213 10:49:38.825397  359214 logs.go:282] 0 containers: []
	W1213 10:49:38.825404  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:49:38.825410  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:49:38.825467  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:49:38.854743  359214 cri.go:89] found id: ""
	I1213 10:49:38.854758  359214 logs.go:282] 0 containers: []
	W1213 10:49:38.854765  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:49:38.854775  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:49:38.854785  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:49:38.911438  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:49:38.911459  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:49:38.928194  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:49:38.928212  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:49:38.993056  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:49:38.985025   20997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:38.985836   20997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:38.987445   20997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:38.987763   20997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:38.989301   20997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:49:38.985025   20997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:38.985836   20997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:38.987445   20997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:38.987763   20997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:38.989301   20997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:49:38.993068  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:49:38.993079  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:49:39.059560  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:49:39.059584  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:49:39.090490  359214 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001187482s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 10:49:39.090521  359214 out.go:285] * 
	W1213 10:49:39.090586  359214 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: identical to the kubeadm init output above (omitted)
	
	W1213 10:49:39.090603  359214 out.go:285] * 
	W1213 10:49:39.092733  359214 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:49:39.097735  359214 out.go:203] 
	W1213 10:49:39.101721  359214 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: identical to the kubeadm init output above (omitted)
	
	W1213 10:49:39.101772  359214 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 10:49:39.101799  359214 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 10:49:39.104924  359214 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861227644Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861318114Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861438764Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861513571Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861578449Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861642483Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861707304Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861776350Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861845545Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861934818Z" level=info msg="Connect containerd service"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.862289545Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.862951451Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.874919104Z" level=info msg="Start subscribing containerd event"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.875103516Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.875569851Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.881349344Z" level=info msg="Start recovering state"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.920785039Z" level=info msg="Start event monitor"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.921012364Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.921112731Z" level=info msg="Start streaming server"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.921198171Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.921421730Z" level=info msg="runtime interface starting up..."
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.921496201Z" level=info msg="starting plugins..."
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.921561104Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 10:37:27 functional-652709 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.922785206Z" level=info msg="containerd successfully booted in 0.088911s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:49:40.333497   21120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:40.334223   21120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:40.336003   21120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:40.336616   21120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:40.338323   21120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +12.907693] overlayfs: idmapped layers are currently not supported
	[Dec13 09:25] overlayfs: idmapped layers are currently not supported
	[ +26.192425] overlayfs: idmapped layers are currently not supported
	[Dec13 09:26] overlayfs: idmapped layers are currently not supported
	[ +25.729788] overlayfs: idmapped layers are currently not supported
	[Dec13 09:27] overlayfs: idmapped layers are currently not supported
	[Dec13 09:28] overlayfs: idmapped layers are currently not supported
	[Dec13 09:31] overlayfs: idmapped layers are currently not supported
	[Dec13 09:32] overlayfs: idmapped layers are currently not supported
	[ +17.979093] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec13 09:43] overlayfs: idmapped layers are currently not supported
	[Dec13 09:45] overlayfs: idmapped layers are currently not supported
	[ +25.885102] overlayfs: idmapped layers are currently not supported
	[Dec13 09:46] overlayfs: idmapped layers are currently not supported
	[ +22.078149] overlayfs: idmapped layers are currently not supported
	[Dec13 09:47] overlayfs: idmapped layers are currently not supported
	[Dec13 09:48] overlayfs: idmapped layers are currently not supported
	[Dec13 09:49] overlayfs: idmapped layers are currently not supported
	[Dec13 09:51] overlayfs: idmapped layers are currently not supported
	[ +17.043564] overlayfs: idmapped layers are currently not supported
	[Dec13 09:52] overlayfs: idmapped layers are currently not supported
	[Dec13 09:53] overlayfs: idmapped layers are currently not supported
	[Dec13 10:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:19] hrtimer: interrupt took 21247146 ns
	
	
	==> kernel <==
	 10:49:40 up  3:32,  0 user,  load average: 0.11, 0.19, 0.47
	Linux functional-652709 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:49:37 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:49:37 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 13 10:49:37 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:37 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:37 functional-652709 kubelet[20923]: E1213 10:49:37.960664   20923 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:49:37 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:49:37 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:49:38 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 13 10:49:38 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:38 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:38 functional-652709 kubelet[20929]: E1213 10:49:38.726051   20929 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:49:38 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:49:38 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:49:39 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 13 10:49:39 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:39 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:39 functional-652709 kubelet[21027]: E1213 10:49:39.459826   21027 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:49:39 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:49:39 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:49:40 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 13 10:49:40 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:40 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:40 functional-652709 kubelet[21092]: E1213 10:49:40.223722   21092 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:49:40 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:49:40 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
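The kubelet restart loop at the end of the log above ("kubelet is configured to not run on a host using cgroup v1") points at a cgroup v1/v2 mismatch rather than a scheduling problem. A quick way to confirm which cgroup version the node container actually sees is to stat its cgroup mount; this is a manual triage sketch using the container name from this run, not a command the harness executed ("cgroup2fs" means cgroup v2, "tmpfs" means cgroup v1):

	# Reports the filesystem type of the cgroup mount inside the node container.
	docker exec functional-652709 stat -fc %T /sys/fs/cgroup

The SystemVerification warnings earlier in the log already indicate this host is on cgroup v1, which is exactly the configuration kubelet v1.35+ now refuses to run on unless explicitly opted in.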
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-652709 -n functional-652709
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-652709 -n functional-652709: exit status 2 (390.06131ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-652709" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (736.22s)
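Two remediation paths follow from this failure. Per the kubeadm warning, kubelet v1.35+ must be explicitly opted back into cgroup v1 by setting the KubeletConfiguration field FailCgroupV1 to false; since this run already applies a strategic-merge patch to the "kubeletconfiguration" target, a minimal sketch of such a patch is shown below (the file name follows kubeadm's target+patchtype naming convention; how minikube would be pointed at the patch directory is an assumption, not something this report shows):

	# Sketch only: opt kubelet v1.35+ back into running on a cgroup v1 host,
	# per the kubeadm SystemVerification warning above.
	cat <<'EOF' > kubeletconfiguration+strategic.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF

Alternatively, minikube's own suggestion in the log is to retry with the systemd cgroup driver:

	out/minikube-linux-arm64 start -p functional-652709 --extra-config=kubelet.cgroup-driver=systemd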

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-652709 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-652709 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (62.903172ms)

-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-652709 get po -l tier=control-plane -n kube-system -o=json": exit status 1
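kubectl printed an empty List to stdout while stderr shows "connection refused", so the selector never reached an apiserver: nothing is listening on 192.168.49.2:8441. A direct probe of the endpoint (a manual triage sketch, not a command this test ran) distinguishes "no listener" from a TLS or authorization problem:

	# Connection refused here (curl exit code 7) confirms the apiserver is down,
	# matching the kubelet crash loop seen in the previous test.
	curl -sk https://192.168.49.2:8441/healthz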
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-652709
helpers_test.go:244: (dbg) docker inspect functional-652709:

-- stdout --
	[
	    {
	        "Id": "0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f",
	        "Created": "2025-12-13T10:22:44.366993781Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 347931,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:22:44.437030763Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/hostname",
	        "HostsPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/hosts",
	        "LogPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f-json.log",
	        "Name": "/functional-652709",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-652709:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-652709",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f",
	                "LowerDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095-init/diff:/var/lib/docker/overlay2/5d0ca86320f2c06765995d644a73af30cd6ddb36f5c1a2a6b1ebb24af53cdabd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/merged",
	                "UpperDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/diff",
	                "WorkDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-652709",
	                "Source": "/var/lib/docker/volumes/functional-652709/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-652709",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-652709",
	                "name.minikube.sigs.k8s.io": "functional-652709",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "52e527b5bd789a02eb7efb651200033ed4929e5fc7545e9df042d3f777cc9782",
	            "SandboxKey": "/var/run/docker/netns/52e527b5bd78",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-652709": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:23:08:9e:cb:13",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "344f2b940117dadb28d1ef1328f911c0446307288fdfafebfe59f38e473f79cb",
	                    "EndpointID": "8954f96e5987202be5715e7023384fe862744778b2520bccba28c57814f0980f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-652709",
	                        "0f6101071ca2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
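Individual host-port mappings from the inspect output above can be extracted with the same Go template minikube itself uses for the SSH port later in this log; for the apiserver port in this run (8441/tcp, bound to 127.0.0.1:33128):

	# Prints just the host port Docker bound for the apiserver's 8441/tcp.
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-652709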
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-652709 -n functional-652709
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-652709 -n functional-652709: exit status 2 (329.970508ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
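The container host is Running while the earlier APIServer probe returned Stopped, which is why the harness treats exit status 2 as "may be ok". Both fields can be read in one call (a sketch; the Kubelet field is part of minikube's status output but is not queried by this test):

	out/minikube-linux-arm64 status -p functional-652709 --format='{{.Host}} {{.APIServer}} {{.Kubelet}}'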
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-319494 image ls --format yaml --alsologtostderr                                                                                              │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ ssh     │ functional-319494 ssh pgrep buildkitd                                                                                                                   │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │                     │
	│ image   │ functional-319494 image ls --format json --alsologtostderr                                                                                              │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ image   │ functional-319494 image build -t localhost/my-image:functional-319494 testdata/build --alsologtostderr                                                  │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ image   │ functional-319494 image ls --format table --alsologtostderr                                                                                             │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ image   │ functional-319494 image ls                                                                                                                              │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ delete  │ -p functional-319494                                                                                                                                    │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
	│ start   │ -p functional-652709 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │                     │
	│ start   │ -p functional-652709 --alsologtostderr -v=8                                                                                                             │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:31 UTC │                     │
	│ cache   │ functional-652709 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ cache   │ functional-652709 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ cache   │ functional-652709 cache add registry.k8s.io/pause:latest                                                                                                │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ cache   │ functional-652709 cache add minikube-local-cache-test:functional-652709                                                                                 │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ cache   │ functional-652709 cache delete minikube-local-cache-test:functional-652709                                                                              │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ ssh     │ functional-652709 ssh sudo crictl images                                                                                                                │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ ssh     │ functional-652709 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ ssh     │ functional-652709 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │                     │
	│ cache   │ functional-652709 cache reload                                                                                                                          │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ ssh     │ functional-652709 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ kubectl │ functional-652709 kubectl -- --context functional-652709 get pods                                                                                       │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │                     │
	│ start   │ -p functional-652709 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:37:25
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:37:25.138350  359214 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:37:25.138465  359214 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:37:25.138469  359214 out.go:374] Setting ErrFile to fd 2...
	I1213 10:37:25.138473  359214 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:37:25.138742  359214 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 10:37:25.139091  359214 out.go:368] Setting JSON to false
	I1213 10:37:25.139911  359214 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":11998,"bootTime":1765610247,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 10:37:25.139964  359214 start.go:143] virtualization:  
	I1213 10:37:25.143535  359214 out.go:179] * [functional-652709] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:37:25.146407  359214 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 10:37:25.146500  359214 notify.go:221] Checking for updates...
	I1213 10:37:25.152371  359214 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:37:25.155287  359214 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:37:25.158064  359214 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 10:37:25.162885  359214 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:37:25.165865  359214 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:37:25.169282  359214 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:37:25.169378  359214 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:37:25.203946  359214 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:37:25.204073  359214 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:37:25.282140  359214 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 10:37:25.272517516 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:37:25.282233  359214 docker.go:319] overlay module found
	I1213 10:37:25.285314  359214 out.go:179] * Using the docker driver based on existing profile
	I1213 10:37:25.288091  359214 start.go:309] selected driver: docker
	I1213 10:37:25.288098  359214 start.go:927] validating driver "docker" against &{Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:37:25.288215  359214 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:37:25.288310  359214 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:37:25.346233  359214 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 10:37:25.336833323 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:37:25.346649  359214 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:37:25.346672  359214 cni.go:84] Creating CNI manager for ""
	I1213 10:37:25.346746  359214 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:37:25.346788  359214 start.go:353] cluster config:
	{Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:37:25.351648  359214 out.go:179] * Starting "functional-652709" primary control-plane node in "functional-652709" cluster
	I1213 10:37:25.354472  359214 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 10:37:25.357365  359214 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:37:25.360240  359214 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 10:37:25.360279  359214 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 10:37:25.360290  359214 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:37:25.360305  359214 cache.go:65] Caching tarball of preloaded images
	I1213 10:37:25.360390  359214 preload.go:238] Found /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 10:37:25.360398  359214 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
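	(Editor's note: the preload short-circuit above is just a file-existence check against the local cache. A minimal shell sketch of the same decision, with the path taken from the log and the echo text illustrative:

	    PRELOAD=/home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	    # If the tarball is already cached, the download step is skipped entirely.
	    if [ -f "$PRELOAD" ]; then echo "preload found in cache, skipping download"; fi
	)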
	I1213 10:37:25.360508  359214 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/config.json ...
	I1213 10:37:25.379669  359214 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:37:25.379680  359214 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:37:25.379701  359214 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:37:25.379731  359214 start.go:360] acquireMachinesLock for functional-652709: {Name:mk6e8c40fbbb5af0bb2468340fd710875030300d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:37:25.379795  359214 start.go:364] duration metric: took 46.958µs to acquireMachinesLock for "functional-652709"
	I1213 10:37:25.379812  359214 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:37:25.379817  359214 fix.go:54] fixHost starting: 
	I1213 10:37:25.380078  359214 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
	I1213 10:37:25.396614  359214 fix.go:112] recreateIfNeeded on functional-652709: state=Running err=<nil>
	W1213 10:37:25.396632  359214 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 10:37:25.399750  359214 out.go:252] * Updating the running docker "functional-652709" container ...
	I1213 10:37:25.399771  359214 machine.go:94] provisionDockerMachine start ...
	I1213 10:37:25.399844  359214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:37:25.416990  359214 main.go:143] libmachine: Using SSH client type: native
	I1213 10:37:25.417324  359214 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1213 10:37:25.417330  359214 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:37:25.566232  359214 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-652709
	
	I1213 10:37:25.566247  359214 ubuntu.go:182] provisioning hostname "functional-652709"
	I1213 10:37:25.566312  359214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:37:25.583930  359214 main.go:143] libmachine: Using SSH client type: native
	I1213 10:37:25.584239  359214 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1213 10:37:25.584247  359214 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-652709 && echo "functional-652709" | sudo tee /etc/hostname
	I1213 10:37:25.743712  359214 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-652709
	
	I1213 10:37:25.743781  359214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:37:25.761387  359214 main.go:143] libmachine: Using SSH client type: native
	I1213 10:37:25.761683  359214 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1213 10:37:25.761697  359214 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-652709' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-652709/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-652709' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:37:25.915528  359214 main.go:143] libmachine: SSH cmd err, output: <nil>: 
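	(Editor's note: the snippet above is what the provisioner pushes over SSH to pin the hostname into /etc/hosts. A quick hand-verification on the node, assuming the commands are run inside the container; the checks are illustrative and not part of the test:

	    hostname                                # expect: functional-652709
	    grep -E '^127\.0\.1\.1[[:space:]]' /etc/hosts   # expect: 127.0.1.1 functional-652709
	)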
	I1213 10:37:25.915543  359214 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-307042/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-307042/.minikube}
	I1213 10:37:25.915567  359214 ubuntu.go:190] setting up certificates
	I1213 10:37:25.915589  359214 provision.go:84] configureAuth start
	I1213 10:37:25.915650  359214 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-652709
	I1213 10:37:25.937241  359214 provision.go:143] copyHostCerts
	I1213 10:37:25.937315  359214 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem, removing ...
	I1213 10:37:25.937323  359214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem
	I1213 10:37:25.937397  359214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem (1082 bytes)
	I1213 10:37:25.937493  359214 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem, removing ...
	I1213 10:37:25.937497  359214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem
	I1213 10:37:25.937521  359214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem (1123 bytes)
	I1213 10:37:25.937570  359214 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem, removing ...
	I1213 10:37:25.937573  359214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem
	I1213 10:37:25.937593  359214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem (1675 bytes)
	I1213 10:37:25.937635  359214 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem org=jenkins.functional-652709 san=[127.0.0.1 192.168.49.2 functional-652709 localhost minikube]
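	(Editor's note: minikube generates that server certificate in Go, but an equivalent openssl invocation is a useful mental model. A hedged sketch using the org and SANs from the log line above; the file names are illustrative:

	    # CSR for the machine, then sign it against the minikube CA with the same SANs.
	    openssl req -new -newkey rsa:2048 -nodes \
	      -keyout server-key.pem -out server.csr -subj "/O=jenkins.functional-652709"
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	      -days 365 -out server.pem \
	      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:functional-652709,DNS:localhost,DNS:minikube')
	)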
	I1213 10:37:26.244127  359214 provision.go:177] copyRemoteCerts
	I1213 10:37:26.244186  359214 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:37:26.244225  359214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:37:26.264658  359214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:37:26.370401  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:37:26.387044  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:37:26.404259  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:37:26.421389  359214 provision.go:87] duration metric: took 505.777833ms to configureAuth
	I1213 10:37:26.421407  359214 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:37:26.421614  359214 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:37:26.421620  359214 machine.go:97] duration metric: took 1.021844371s to provisionDockerMachine
	I1213 10:37:26.421627  359214 start.go:293] postStartSetup for "functional-652709" (driver="docker")
	I1213 10:37:26.421636  359214 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:37:26.421692  359214 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:37:26.421728  359214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:37:26.439115  359214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:37:26.542461  359214 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:37:26.545680  359214 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:37:26.545698  359214 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:37:26.545710  359214 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/addons for local assets ...
	I1213 10:37:26.545763  359214 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/files for local assets ...
	I1213 10:37:26.545836  359214 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> 3089152.pem in /etc/ssl/certs
	I1213 10:37:26.545911  359214 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/test/nested/copy/308915/hosts -> hosts in /etc/test/nested/copy/308915
	I1213 10:37:26.545959  359214 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/308915
	I1213 10:37:26.553760  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 10:37:26.571190  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/test/nested/copy/308915/hosts --> /etc/test/nested/copy/308915/hosts (40 bytes)
	I1213 10:37:26.588882  359214 start.go:296] duration metric: took 167.239997ms for postStartSetup
	I1213 10:37:26.588951  359214 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:37:26.588988  359214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:37:26.606145  359214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:37:26.708907  359214 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
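	(Editor's note: the two df probes above are simple field extractions; this is standard df/awk behavior, not minikube-specific:

	    df -h /var  | awk 'NR==2{print $5}'   # row 2, column 5: percentage of /var in use, e.g. "17%"
	    df -BG /var | awk 'NR==2{print $4}'   # row 2, column 4: space still available, in 1 GiB blocks
	)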
	I1213 10:37:26.713681  359214 fix.go:56] duration metric: took 1.333856829s for fixHost
	I1213 10:37:26.713698  359214 start.go:83] releasing machines lock for "functional-652709", held for 1.333895015s
	I1213 10:37:26.713781  359214 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-652709
	I1213 10:37:26.733362  359214 ssh_runner.go:195] Run: cat /version.json
	I1213 10:37:26.733405  359214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:37:26.733670  359214 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:37:26.733727  359214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:37:26.755898  359214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:37:26.764378  359214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:37:26.858420  359214 ssh_runner.go:195] Run: systemctl --version
	I1213 10:37:26.952524  359214 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:37:26.956969  359214 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:37:26.957030  359214 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:37:26.964724  359214 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
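	(Editor's note: the find invocation at 10:37:26.957030 is logged with its shell escaping stripped, since the arguments travel over SSH pre-tokenized. Quoted for an interactive shell it would look roughly like this, assuming GNU find, which substitutes {} anywhere in -exec arguments:

	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	      -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;
	)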
	I1213 10:37:26.964738  359214 start.go:496] detecting cgroup driver to use...
	I1213 10:37:26.964768  359214 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:37:26.964823  359214 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 10:37:26.980031  359214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 10:37:26.993058  359214 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:37:26.993140  359214 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:37:27.016019  359214 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:37:27.029352  359214 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:37:27.143876  359214 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:37:27.259911  359214 docker.go:234] disabling docker service ...
	I1213 10:37:27.259973  359214 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:37:27.275304  359214 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:37:27.288715  359214 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:37:27.403391  359214 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:37:27.538286  359214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:37:27.551384  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:37:27.565344  359214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 10:37:27.574020  359214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 10:37:27.583189  359214 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 10:37:27.583255  359214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 10:37:27.591895  359214 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:37:27.600966  359214 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 10:37:27.609996  359214 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:37:27.618821  359214 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:37:27.626864  359214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 10:37:27.635612  359214 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 10:37:27.644477  359214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 10:37:27.653477  359214 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:37:27.661005  359214 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
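	(Editor's note: taken together, the sed edits at 10:37:27.565 through 10:37:27.644 steer /etc/containerd/config.toml toward a fragment roughly like the following. This is a sketch assuming the stock kicbase config layout; only keys the seds touch are shown, and their exact table nesting may differ in containerd 2.x:

	    [plugins."io.containerd.grpc.v1.cri"]
	      enable_unprivileged_ports = true
	      sandbox_image = "registry.k8s.io/pause:3.10.1"
	      restrict_oom_score_adj = false
	      [plugins."io.containerd.grpc.v1.cri".cni]
	        conf_dir = "/etc/cni/net.d"
	      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	        runtime_type = "io.containerd.runc.v2"
	        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	          SystemdCgroup = false
	)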
	I1213 10:37:27.668365  359214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:37:27.776281  359214 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 10:37:27.924718  359214 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 10:37:27.924777  359214 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 10:37:27.928729  359214 start.go:564] Will wait 60s for crictl version
	I1213 10:37:27.928789  359214 ssh_runner.go:195] Run: which crictl
	I1213 10:37:27.932637  359214 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:37:27.956729  359214 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 10:37:27.956786  359214 ssh_runner.go:195] Run: containerd --version
	I1213 10:37:27.979747  359214 ssh_runner.go:195] Run: containerd --version
	I1213 10:37:28.007018  359214 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 10:37:28.009973  359214 cli_runner.go:164] Run: docker network inspect functional-652709 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:37:28.026979  359214 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 10:37:28.034215  359214 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1213 10:37:28.037114  359214 kubeadm.go:884] updating cluster {Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:37:28.037277  359214 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 10:37:28.037366  359214 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:37:28.069735  359214 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 10:37:28.069748  359214 containerd.go:534] Images already preloaded, skipping extraction
	I1213 10:37:28.069804  359214 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:37:28.094782  359214 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 10:37:28.094795  359214 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:37:28.094801  359214 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1213 10:37:28.094901  359214 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-652709 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
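	(Editor's note: because ExecStart is first cleared and then re-set, the effective kubelet unit is the merge of /lib/systemd/system/kubelet.service and the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf drop-in written below. A verification sketch for inspecting the merged result on the node:

	    sudo systemctl cat kubelet              # prints the base unit followed by every drop-in
	    sudo systemctl status kubelet --no-pager
	)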
	I1213 10:37:28.094963  359214 ssh_runner.go:195] Run: sudo crictl info
	I1213 10:37:28.123071  359214 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1213 10:37:28.123096  359214 cni.go:84] Creating CNI manager for ""
	I1213 10:37:28.123104  359214 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:37:28.123112  359214 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:37:28.123134  359214 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-652709 NodeName:functional-652709 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:37:28.123244  359214 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-652709"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 10:37:28.123313  359214 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:37:28.131175  359214 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:37:28.131238  359214 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:37:28.138792  359214 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 10:37:28.151537  359214 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:37:28.169495  359214 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
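	(Editor's note: with the rendered kubeadm.yaml.new on the node, the config can be sanity-checked before kubeadm consumes it. Recent kubeadm releases ship a validate subcommand; whether this exact v1.35.0-beta.0 build accepts it is an assumption:

	    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
	      kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	)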
	I1213 10:37:28.184364  359214 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:37:28.188525  359214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:37:28.305096  359214 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:37:28.912534  359214 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709 for IP: 192.168.49.2
	I1213 10:37:28.912575  359214 certs.go:195] generating shared ca certs ...
	I1213 10:37:28.912591  359214 certs.go:227] acquiring lock for ca certs: {Name:mkca84a85b1b664dba08ef567e84d493256f825e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:37:28.912719  359214 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key
	I1213 10:37:28.912771  359214 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key
	I1213 10:37:28.912778  359214 certs.go:257] generating profile certs ...
	I1213 10:37:28.912857  359214 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.key
	I1213 10:37:28.912917  359214 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.key.86e7afd1
	I1213 10:37:28.912954  359214 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.key
	I1213 10:37:28.913063  359214 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem (1338 bytes)
	W1213 10:37:28.913092  359214 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915_empty.pem, impossibly tiny 0 bytes
	I1213 10:37:28.913099  359214 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:37:28.913124  359214 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:37:28.913151  359214 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:37:28.913174  359214 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem (1675 bytes)
	I1213 10:37:28.913221  359214 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 10:37:28.913808  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:37:28.931820  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:37:28.949028  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:37:28.966476  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:37:28.984047  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:37:29.002075  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 10:37:29.020305  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:37:29.037811  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:37:29.054630  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:37:29.071547  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem --> /usr/share/ca-certificates/308915.pem (1338 bytes)
	I1213 10:37:29.088633  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /usr/share/ca-certificates/3089152.pem (1708 bytes)
	I1213 10:37:29.105638  359214 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:37:29.118149  359214 ssh_runner.go:195] Run: openssl version
	I1213 10:37:29.124118  359214 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:37:29.131416  359214 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:37:29.138705  359214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:37:29.142329  359214 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:37:29.142388  359214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:37:29.183023  359214 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:37:29.190485  359214 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/308915.pem
	I1213 10:37:29.197738  359214 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/308915.pem /etc/ssl/certs/308915.pem
	I1213 10:37:29.205192  359214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/308915.pem
	I1213 10:37:29.209070  359214 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:22 /usr/share/ca-certificates/308915.pem
	I1213 10:37:29.209124  359214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308915.pem
	I1213 10:37:29.250234  359214 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:37:29.257744  359214 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3089152.pem
	I1213 10:37:29.265022  359214 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3089152.pem /etc/ssl/certs/3089152.pem
	I1213 10:37:29.272593  359214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3089152.pem
	I1213 10:37:29.276820  359214 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:22 /usr/share/ca-certificates/3089152.pem
	I1213 10:37:29.276874  359214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3089152.pem
	I1213 10:37:29.317834  359214 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
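	(Editor's note: the ln/hash/test cycle repeated three times above is the standard OpenSSL trust-store layout: each CA is symlinked under its subject hash with a .0 suffix. Reproduced as a sketch for one cert:

	    cert=/usr/share/ca-certificates/minikubeCA.pem
	    hash=$(openssl x509 -hash -noout -in "$cert")     # e.g. b5213941, matching the test -L above
	    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
	)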
	I1213 10:37:29.325126  359214 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:37:29.328844  359214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:37:29.369639  359214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:37:29.410192  359214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:37:29.467336  359214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:37:29.508158  359214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:37:29.549013  359214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
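	(Editor's note: the -checkend 86400 probes ask whether each control-plane cert survives the next 24 hours; openssl exits non-zero if the cert will have expired within that many seconds. The same check by hand:

	    sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt \
	      && echo "valid for at least 24h" || echo "expires within 24h, would trigger regeneration"
	)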
	I1213 10:37:29.589618  359214 kubeadm.go:401] StartCluster: {Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:37:29.589715  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 10:37:29.589775  359214 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:37:29.617382  359214 cri.go:89] found id: ""
	I1213 10:37:29.617441  359214 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:37:29.625150  359214 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:37:29.625165  359214 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:37:29.625217  359214 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:37:29.632536  359214 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:37:29.633037  359214 kubeconfig.go:125] found "functional-652709" server: "https://192.168.49.2:8441"
	I1213 10:37:29.635539  359214 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:37:29.643331  359214 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-13 10:22:52.033435592 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-13 10:37:28.181843120 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1213 10:37:29.643344  359214 kubeadm.go:1161] stopping kube-system containers ...
	I1213 10:37:29.643355  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1213 10:37:29.643418  359214 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:37:29.681117  359214 cri.go:89] found id: ""
	I1213 10:37:29.681185  359214 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 10:37:29.700348  359214 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:37:29.708464  359214 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 13 10:26 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 13 10:27 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 13 10:26 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 13 10:27 /etc/kubernetes/scheduler.conf
	
	I1213 10:37:29.708519  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:37:29.716973  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:37:29.724972  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:37:29.725027  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:37:29.732670  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:37:29.740374  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:37:29.740426  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:37:29.747796  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:37:29.755836  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:37:29.755895  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
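	(Editor's note: the grep-then-rm cycles above implement one rule: any kubeconfig that does not point at https://control-plane.minikube.internal:8441 is deleted so kubeadm regenerates it. A compact sketch; the loop is illustrative, the log runs the files one at a time, and admin.conf survived because its grep matched:

	    endpoint='https://control-plane.minikube.internal:8441'
	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q "$endpoint" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
	    done
	)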
	I1213 10:37:29.763121  359214 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:37:29.770676  359214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:37:29.815944  359214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:37:31.022963  359214 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.206994632s)
	I1213 10:37:31.023029  359214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:37:31.239388  359214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:37:31.313712  359214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
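	(Editor's note: the restart path replays only selected init phases rather than a full kubeadm init. The equivalent manual invocation, with paths taken from the log; the loop itself is illustrative:

	    for phase in 'certs all' 'kubeconfig all' 'kubelet-start' 'control-plane all' 'etcd local'; do
	      # $phase is deliberately unquoted so 'certs all' splits into subcommand + argument
	      sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
	        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	    done
	)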
	I1213 10:37:31.358670  359214 api_server.go:52] waiting for apiserver process to appear ...
	I1213 10:37:31.358755  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:31.859658  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:32.358989  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:32.859540  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:33.359279  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:33.859755  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:34.358874  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:34.859660  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:35.358974  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:35.859781  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:36.359545  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:36.858931  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:37.359594  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:37.858997  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:38.359204  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:38.858979  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:39.358917  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:39.859473  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:40.359538  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:40.859107  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:41.358909  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:41.859704  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:42.359845  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:42.858940  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:43.359903  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:43.859817  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:44.359835  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:44.859527  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:45.359678  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:45.859496  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:46.359291  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:46.858996  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:47.358908  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:47.859899  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:48.358923  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:48.859520  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:49.358971  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:49.859614  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:50.359594  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:50.859684  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:51.359555  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:51.859532  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:52.359643  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:52.858959  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:53.359880  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:53.859709  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:54.359771  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:54.859730  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:55.359785  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:55.858870  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:56.359649  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:56.858975  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:57.358923  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:57.858974  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:58.359777  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:58.859581  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:59.359156  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:59.858896  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:00.358974  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:00.859820  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:01.359786  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:01.858901  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:02.359740  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:02.858926  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:03.359018  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:03.859003  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:04.358882  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:04.859861  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:05.358860  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:05.859819  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:06.358836  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:06.859844  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:07.359700  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:07.859637  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:08.358985  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:08.859911  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:09.358995  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:09.859620  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:10.359502  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same pgrep probe repeats every ~500ms, 42 times in all, through 10:38:30.859558, without finding a matching process ...]
	I1213 10:38:30.859558  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
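The cadence above is a fixed-interval wait loop: the runner re-issues the same pgrep probe roughly every 500ms until a kube-apiserver process matching the minikube profile appears or an overall deadline expires. A minimal local sketch of that loop follows; the waitForAPIServer helper and the 30s deadline are my own assumptions (minikube's real implementation runs the probe over SSH and uses its own timeouts):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer re-runs the pgrep probe from the log every 500ms.
// pgrep exits non-zero when nothing matches, so a nil error means the
// apiserver process finally exists.
func waitForAPIServer(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		if exec.CommandContext(ctx, "sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("kube-apiserver never appeared: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	if err := waitForAPIServer(ctx); err != nil {
		fmt.Println(err)
	}
}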
	I1213 10:38:31.359176  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:31.359252  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:31.383827  359214 cri.go:89] found id: ""
	I1213 10:38:31.383841  359214 logs.go:282] 0 containers: []
	W1213 10:38:31.383849  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:31.383855  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:31.383917  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:31.412267  359214 cri.go:89] found id: ""
	I1213 10:38:31.412291  359214 logs.go:282] 0 containers: []
	W1213 10:38:31.412300  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:31.412305  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:31.412364  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:31.437736  359214 cri.go:89] found id: ""
	I1213 10:38:31.437751  359214 logs.go:282] 0 containers: []
	W1213 10:38:31.437758  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:31.437763  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:31.437824  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:31.461791  359214 cri.go:89] found id: ""
	I1213 10:38:31.461806  359214 logs.go:282] 0 containers: []
	W1213 10:38:31.461813  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:31.461818  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:31.461880  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:31.488695  359214 cri.go:89] found id: ""
	I1213 10:38:31.488709  359214 logs.go:282] 0 containers: []
	W1213 10:38:31.488717  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:31.488722  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:31.488789  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:31.517230  359214 cri.go:89] found id: ""
	I1213 10:38:31.517245  359214 logs.go:282] 0 containers: []
	W1213 10:38:31.517274  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:31.517281  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:31.517340  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:31.541920  359214 cri.go:89] found id: ""
	I1213 10:38:31.541934  359214 logs.go:282] 0 containers: []
	W1213 10:38:31.541942  359214 logs.go:284] No container was found matching "kindnet"
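Each cri.go/logs.go pair above is the same check applied to a different control-plane component: run crictl ps -a --quiet --name=<component>, which prints one container ID per line, and treat empty output as zero containers (the found id: "" lines). A small sketch of that probe; the function name is my own, while the crictl invocation is taken verbatim from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs runs the same crictl probe as the log; --quiet prints
// only container IDs, one per line, and prints nothing at all when no
// container matches the given name filter.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d container(s) %v\n", c, len(ids), ids)
	}
}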
	I1213 10:38:31.541951  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:31.541962  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:31.558143  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:31.558161  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:31.623427  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:31.614536   10797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.615101   10797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.616803   10797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.617190   10797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.619517   10797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (verbatim repeat of the connection-refused stderr shown above) ** /stderr **
	I1213 10:38:31.623438  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:31.623449  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:31.686774  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:31.686794  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:31.719218  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:31.719234  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
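Every describe nodes attempt in this section fails identically: kubectl reads the endpoint from /var/lib/minikube/kubeconfig (localhost:8441, the --apiserver-port this profile was started with) and gets connection refused because nothing is listening there. The same condition can be confirmed with a plain TCP dial; the host and port here come straight from the log:

package main

import (
	"fmt"
	"net"
	"time"
)

// A refused connection on localhost:8441 means the port has no listener,
// i.e. kube-apiserver is down -- exactly what the repeated
// "dial tcp [::1]:8441: connect: connection refused" lines report.
func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on :8441")
}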
	I1213 10:38:34.280556  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:34.293171  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:34.293241  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:34.319161  359214 cri.go:89] found id: ""
	I1213 10:38:34.319176  359214 logs.go:282] 0 containers: []
	W1213 10:38:34.319183  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:34.319189  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:34.319245  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:34.348792  359214 cri.go:89] found id: ""
	I1213 10:38:34.348806  359214 logs.go:282] 0 containers: []
	W1213 10:38:34.348814  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:34.348819  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:34.348879  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:34.374794  359214 cri.go:89] found id: ""
	I1213 10:38:34.374809  359214 logs.go:282] 0 containers: []
	W1213 10:38:34.374816  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:34.374822  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:34.374883  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:34.399481  359214 cri.go:89] found id: ""
	I1213 10:38:34.399496  359214 logs.go:282] 0 containers: []
	W1213 10:38:34.399503  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:34.399509  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:34.399567  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:34.424169  359214 cri.go:89] found id: ""
	I1213 10:38:34.424184  359214 logs.go:282] 0 containers: []
	W1213 10:38:34.424191  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:34.424196  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:34.424300  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:34.449747  359214 cri.go:89] found id: ""
	I1213 10:38:34.449762  359214 logs.go:282] 0 containers: []
	W1213 10:38:34.449769  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:34.449775  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:34.449839  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:34.475244  359214 cri.go:89] found id: ""
	I1213 10:38:34.475259  359214 logs.go:282] 0 containers: []
	W1213 10:38:34.475266  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:34.475274  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:34.475284  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:34.531644  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:34.531665  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:34.548876  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:34.548895  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:34.612831  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:34.605081   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.605477   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.607131   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.607458   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.609038   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (verbatim repeat of the connection-refused stderr shown above) ** /stderr **
	I1213 10:38:34.612842  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:34.612853  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:34.677588  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:34.677607  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:37.204561  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:37.215900  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:37.215960  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:37.240644  359214 cri.go:89] found id: ""
	I1213 10:38:37.240679  359214 logs.go:282] 0 containers: []
	W1213 10:38:37.240697  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:37.240710  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:37.240796  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:37.265154  359214 cri.go:89] found id: ""
	I1213 10:38:37.265168  359214 logs.go:282] 0 containers: []
	W1213 10:38:37.265176  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:37.265181  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:37.265240  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:37.290309  359214 cri.go:89] found id: ""
	I1213 10:38:37.290323  359214 logs.go:282] 0 containers: []
	W1213 10:38:37.290331  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:37.290336  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:37.290402  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:37.314207  359214 cri.go:89] found id: ""
	I1213 10:38:37.314222  359214 logs.go:282] 0 containers: []
	W1213 10:38:37.314229  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:37.314235  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:37.314294  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:37.338622  359214 cri.go:89] found id: ""
	I1213 10:38:37.338637  359214 logs.go:282] 0 containers: []
	W1213 10:38:37.338645  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:37.338651  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:37.338731  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:37.362866  359214 cri.go:89] found id: ""
	I1213 10:38:37.362881  359214 logs.go:282] 0 containers: []
	W1213 10:38:37.362888  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:37.362894  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:37.362954  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:37.388313  359214 cri.go:89] found id: ""
	I1213 10:38:37.388327  359214 logs.go:282] 0 containers: []
	W1213 10:38:37.388335  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:37.388343  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:37.388355  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:37.405018  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:37.405035  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:37.467928  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:37.459672   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.460192   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.461721   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.462120   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.463584   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (verbatim repeat of the connection-refused stderr shown above) ** /stderr **
	I1213 10:38:37.467941  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:37.467952  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:37.536764  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:37.536793  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:37.565751  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:37.565767  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:40.124516  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:40.136075  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:40.136155  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:40.180740  359214 cri.go:89] found id: ""
	I1213 10:38:40.180755  359214 logs.go:282] 0 containers: []
	W1213 10:38:40.180763  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:40.180771  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:40.180844  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:40.214880  359214 cri.go:89] found id: ""
	I1213 10:38:40.214894  359214 logs.go:282] 0 containers: []
	W1213 10:38:40.214912  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:40.214918  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:40.214986  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:40.255502  359214 cri.go:89] found id: ""
	I1213 10:38:40.255516  359214 logs.go:282] 0 containers: []
	W1213 10:38:40.255524  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:40.255529  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:40.255590  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:40.279736  359214 cri.go:89] found id: ""
	I1213 10:38:40.279750  359214 logs.go:282] 0 containers: []
	W1213 10:38:40.279761  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:40.279766  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:40.279827  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:40.305162  359214 cri.go:89] found id: ""
	I1213 10:38:40.305186  359214 logs.go:282] 0 containers: []
	W1213 10:38:40.305194  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:40.305199  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:40.305268  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:40.330075  359214 cri.go:89] found id: ""
	I1213 10:38:40.330089  359214 logs.go:282] 0 containers: []
	W1213 10:38:40.330097  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:40.330103  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:40.330171  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:40.356608  359214 cri.go:89] found id: ""
	I1213 10:38:40.356623  359214 logs.go:282] 0 containers: []
	W1213 10:38:40.356631  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:40.356639  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:40.356649  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:40.386833  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:40.386850  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:40.442503  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:40.442523  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:40.458859  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:40.458875  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:40.526393  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:40.517849   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.518498   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.520192   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.520775   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.522583   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (verbatim repeat of the connection-refused stderr shown above) ** /stderr **
	I1213 10:38:40.526415  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:40.526425  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:43.093725  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:43.104280  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:43.104351  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:43.128552  359214 cri.go:89] found id: ""
	I1213 10:38:43.128566  359214 logs.go:282] 0 containers: []
	W1213 10:38:43.128574  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:43.128579  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:43.128637  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:43.153838  359214 cri.go:89] found id: ""
	I1213 10:38:43.153853  359214 logs.go:282] 0 containers: []
	W1213 10:38:43.153861  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:43.153866  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:43.153925  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:43.182604  359214 cri.go:89] found id: ""
	I1213 10:38:43.182617  359214 logs.go:282] 0 containers: []
	W1213 10:38:43.182624  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:43.182631  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:43.182751  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:43.212454  359214 cri.go:89] found id: ""
	I1213 10:38:43.212481  359214 logs.go:282] 0 containers: []
	W1213 10:38:43.212489  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:43.212501  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:43.212572  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:43.239973  359214 cri.go:89] found id: ""
	I1213 10:38:43.239987  359214 logs.go:282] 0 containers: []
	W1213 10:38:43.240005  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:43.240011  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:43.240074  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:43.264733  359214 cri.go:89] found id: ""
	I1213 10:38:43.264748  359214 logs.go:282] 0 containers: []
	W1213 10:38:43.264755  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:43.264767  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:43.264826  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:43.291333  359214 cri.go:89] found id: ""
	I1213 10:38:43.291347  359214 logs.go:282] 0 containers: []
	W1213 10:38:43.291354  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:43.291362  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:43.291372  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:43.348037  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:43.348057  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:43.364359  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:43.364377  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:43.426788  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:43.418519   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.419245   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.420917   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.421479   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.423061   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (verbatim repeat of the connection-refused stderr shown above) ** /stderr **
	I1213 10:38:43.426809  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:43.426819  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:43.492237  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:43.492258  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:46.019179  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:46.029376  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:46.029454  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:46.053215  359214 cri.go:89] found id: ""
	I1213 10:38:46.053229  359214 logs.go:282] 0 containers: []
	W1213 10:38:46.053236  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:46.053242  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:46.053315  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:46.078867  359214 cri.go:89] found id: ""
	I1213 10:38:46.078882  359214 logs.go:282] 0 containers: []
	W1213 10:38:46.078889  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:46.078895  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:46.078955  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:46.104476  359214 cri.go:89] found id: ""
	I1213 10:38:46.104490  359214 logs.go:282] 0 containers: []
	W1213 10:38:46.104498  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:46.104503  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:46.104584  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:46.132735  359214 cri.go:89] found id: ""
	I1213 10:38:46.132750  359214 logs.go:282] 0 containers: []
	W1213 10:38:46.132758  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:46.132763  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:46.132844  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:46.171837  359214 cri.go:89] found id: ""
	I1213 10:38:46.171852  359214 logs.go:282] 0 containers: []
	W1213 10:38:46.171859  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:46.171865  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:46.171925  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:46.214470  359214 cri.go:89] found id: ""
	I1213 10:38:46.214484  359214 logs.go:282] 0 containers: []
	W1213 10:38:46.214501  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:46.214508  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:46.214581  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:46.241616  359214 cri.go:89] found id: ""
	I1213 10:38:46.241631  359214 logs.go:282] 0 containers: []
	W1213 10:38:46.241638  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:46.241646  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:46.241657  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:46.269691  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:46.269717  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:46.326434  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:46.326454  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:46.342808  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:46.342825  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:46.406446  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:46.398462   11336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.399218   11336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.400888   11336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.401204   11336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.402682   11336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (verbatim repeat of the connection-refused stderr shown above) ** /stderr **
	I1213 10:38:46.406456  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:46.406466  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:48.970215  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:48.980360  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:48.980424  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:49.007836  359214 cri.go:89] found id: ""
	I1213 10:38:49.007857  359214 logs.go:282] 0 containers: []
	W1213 10:38:49.007865  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:49.007870  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:49.007930  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:49.032102  359214 cri.go:89] found id: ""
	I1213 10:38:49.032116  359214 logs.go:282] 0 containers: []
	W1213 10:38:49.032124  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:49.032129  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:49.032188  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:49.056548  359214 cri.go:89] found id: ""
	I1213 10:38:49.056562  359214 logs.go:282] 0 containers: []
	W1213 10:38:49.056577  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:49.056582  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:49.056638  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:49.080172  359214 cri.go:89] found id: ""
	I1213 10:38:49.080186  359214 logs.go:282] 0 containers: []
	W1213 10:38:49.080194  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:49.080199  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:49.080257  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:49.104358  359214 cri.go:89] found id: ""
	I1213 10:38:49.104372  359214 logs.go:282] 0 containers: []
	W1213 10:38:49.104380  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:49.104385  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:49.104456  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:49.131026  359214 cri.go:89] found id: ""
	I1213 10:38:49.131041  359214 logs.go:282] 0 containers: []
	W1213 10:38:49.131048  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:49.131054  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:49.131111  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:49.155850  359214 cri.go:89] found id: ""
	I1213 10:38:49.155865  359214 logs.go:282] 0 containers: []
	W1213 10:38:49.155872  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:49.155881  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:49.155891  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:49.237398  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:49.228981   11424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.229481   11424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.231324   11424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.231926   11424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.233542   11424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (verbatim repeat of the connection-refused stderr shown above) ** /stderr **
	I1213 10:38:49.237409  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:49.237422  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:49.300000  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:49.300020  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:49.330957  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:49.330973  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:49.392815  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:49.392834  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
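The gathering steps themselves are fixed shell pipelines run through the SSH runner; only their order varies from cycle to cycle. A compact local stand-in that fans out the same four commands (the commands are copied verbatim from the log; running them locally instead of over SSH is the only simplification):

package main

import (
	"fmt"
	"os/exec"
)

// The four "Gathering logs for ..." targets and their exact commands from
// the log above.
func main() {
	cmds := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"containerd", "sudo journalctl -u containerd -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
		fmt.Printf("=== %s (err=%v) ===\n%s\n", c.name, err, out)
	}
}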
	I1213 10:38:51.909143  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:51.919406  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:51.919465  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:51.948136  359214 cri.go:89] found id: ""
	I1213 10:38:51.948150  359214 logs.go:282] 0 containers: []
	W1213 10:38:51.948157  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:51.948163  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:51.948221  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:51.972396  359214 cri.go:89] found id: ""
	I1213 10:38:51.972411  359214 logs.go:282] 0 containers: []
	W1213 10:38:51.972420  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:51.972424  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:51.972497  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:52.003416  359214 cri.go:89] found id: ""
	I1213 10:38:52.003433  359214 logs.go:282] 0 containers: []
	W1213 10:38:52.003442  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:52.003449  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:52.003533  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:52.031359  359214 cri.go:89] found id: ""
	I1213 10:38:52.031374  359214 logs.go:282] 0 containers: []
	W1213 10:38:52.031382  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:52.031387  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:52.031447  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:52.056514  359214 cri.go:89] found id: ""
	I1213 10:38:52.056529  359214 logs.go:282] 0 containers: []
	W1213 10:38:52.056536  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:52.056541  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:52.056619  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:52.085509  359214 cri.go:89] found id: ""
	I1213 10:38:52.085524  359214 logs.go:282] 0 containers: []
	W1213 10:38:52.085533  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:52.085539  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:52.085613  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:52.113117  359214 cri.go:89] found id: ""
	I1213 10:38:52.113131  359214 logs.go:282] 0 containers: []
	W1213 10:38:52.113138  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:52.113146  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:52.113157  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:52.129605  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:52.129627  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:52.198531  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:52.190917   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.191383   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.192873   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.193169   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.194579   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (verbatim repeat of the connection-refused stderr shown above) ** /stderr **
	I1213 10:38:52.198542  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:52.198554  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:52.267617  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:52.267640  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:52.301362  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:52.301379  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:54.858319  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:54.868860  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:54.868931  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:54.895935  359214 cri.go:89] found id: ""
	I1213 10:38:54.895949  359214 logs.go:282] 0 containers: []
	W1213 10:38:54.895956  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:54.895962  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:54.896020  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:54.924712  359214 cri.go:89] found id: ""
	I1213 10:38:54.924727  359214 logs.go:282] 0 containers: []
	W1213 10:38:54.924734  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:54.924740  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:54.924807  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:54.949662  359214 cri.go:89] found id: ""
	I1213 10:38:54.949677  359214 logs.go:282] 0 containers: []
	W1213 10:38:54.949685  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:54.949690  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:54.949758  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:54.973861  359214 cri.go:89] found id: ""
	I1213 10:38:54.973876  359214 logs.go:282] 0 containers: []
	W1213 10:38:54.973883  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:54.973889  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:54.973949  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:54.999167  359214 cri.go:89] found id: ""
	I1213 10:38:54.999182  359214 logs.go:282] 0 containers: []
	W1213 10:38:54.999190  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:54.999196  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:54.999267  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:55.030614  359214 cri.go:89] found id: ""
	I1213 10:38:55.030630  359214 logs.go:282] 0 containers: []
	W1213 10:38:55.030638  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:55.030644  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:55.030764  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:55.059903  359214 cri.go:89] found id: ""
	I1213 10:38:55.059918  359214 logs.go:282] 0 containers: []
	W1213 10:38:55.059925  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:55.059933  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:55.059943  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:55.129097  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:55.129156  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:55.157699  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:55.157717  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:55.226688  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:55.226706  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:55.244093  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:55.244111  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:55.309464  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:55.300977   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.301803   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.303423   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.304086   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.305672   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:55.300977   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.301803   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.303423   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.304086   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.305672   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
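Each retry cycle also scans the CRI for every control-plane component by name, and every scan comes back empty: no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet container was ever created. The scan amounts to the loop sketched below (a simplification of what ssh_runner executes remotely; the crictl invocation is taken verbatim from the log):

```go
// scan.go: run the same per-component crictl query the log shows and
// report components with no matching container (running or exited).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		// Same command as in the log: sudo crictl ps -a --quiet --name=<component>
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil || strings.TrimSpace(string(out)) == "" {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %s\n", name, strings.TrimSpace(string(out)))
	}
}
```

An all-empty scan like this points at the layer below Kubernetes: the static pods were never created, so the kubelet and containerd journals that each cycle gathers are the next place to look.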
	I1213 10:38:57.809736  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:57.819959  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:57.820025  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:57.844184  359214 cri.go:89] found id: ""
	I1213 10:38:57.844198  359214 logs.go:282] 0 containers: []
	W1213 10:38:57.844206  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:57.844211  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:57.844270  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:57.869511  359214 cri.go:89] found id: ""
	I1213 10:38:57.869524  359214 logs.go:282] 0 containers: []
	W1213 10:38:57.869532  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:57.869553  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:57.869613  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:57.895212  359214 cri.go:89] found id: ""
	I1213 10:38:57.895226  359214 logs.go:282] 0 containers: []
	W1213 10:38:57.895234  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:57.895239  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:57.895298  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:57.919989  359214 cri.go:89] found id: ""
	I1213 10:38:57.920004  359214 logs.go:282] 0 containers: []
	W1213 10:38:57.920011  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:57.920018  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:57.920076  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:57.948250  359214 cri.go:89] found id: ""
	I1213 10:38:57.948263  359214 logs.go:282] 0 containers: []
	W1213 10:38:57.948271  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:57.948277  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:57.948334  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:57.974322  359214 cri.go:89] found id: ""
	I1213 10:38:57.974337  359214 logs.go:282] 0 containers: []
	W1213 10:38:57.974345  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:57.974350  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:57.974423  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:58.005721  359214 cri.go:89] found id: ""
	I1213 10:38:58.005737  359214 logs.go:282] 0 containers: []
	W1213 10:38:58.005747  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:58.005757  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:58.005768  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:58.064186  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:58.064207  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:58.080907  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:58.080924  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:58.146147  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:58.137210   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.137944   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.139692   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.140402   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.141981   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:58.137210   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.137944   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.139692   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.140402   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.141981   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:58.146159  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:58.146170  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:58.214235  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:58.214253  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
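The timestamps show the whole sequence repeating on a fixed cadence: the pgrep probes land at 10:38:54.8, 10:38:57.8, 10:39:00.7, and so on, roughly every three seconds, until the start timeout expires. A hypothetical fixed-interval poll with the same shape (minikube's real implementation may differ; only the pgrep pattern is taken from the log, and the deadline is illustrative):

```go
// poll.go: wait for a kube-apiserver process on a ~3s interval, as the
// log's timestamps suggest.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the probe in the log:
//   sudo pgrep -xnf kube-apiserver.*minikube.*
// pgrep exits 0 only when a matching process exists.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(8 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		time.Sleep(3 * time.Second) // interval inferred from the log timestamps
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
```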
	I1213 10:39:00.744729  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:00.755028  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:00.755086  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:00.780193  359214 cri.go:89] found id: ""
	I1213 10:39:00.780207  359214 logs.go:282] 0 containers: []
	W1213 10:39:00.780215  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:00.780221  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:00.780293  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:00.806094  359214 cri.go:89] found id: ""
	I1213 10:39:00.806109  359214 logs.go:282] 0 containers: []
	W1213 10:39:00.806116  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:00.806123  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:00.806190  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:00.830215  359214 cri.go:89] found id: ""
	I1213 10:39:00.830229  359214 logs.go:282] 0 containers: []
	W1213 10:39:00.830236  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:00.830241  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:00.830298  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:00.858553  359214 cri.go:89] found id: ""
	I1213 10:39:00.858567  359214 logs.go:282] 0 containers: []
	W1213 10:39:00.858575  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:00.858581  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:00.858638  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:00.883276  359214 cri.go:89] found id: ""
	I1213 10:39:00.883290  359214 logs.go:282] 0 containers: []
	W1213 10:39:00.883298  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:00.883304  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:00.883366  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:00.908199  359214 cri.go:89] found id: ""
	I1213 10:39:00.908214  359214 logs.go:282] 0 containers: []
	W1213 10:39:00.908222  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:00.908235  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:00.908292  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:00.933487  359214 cri.go:89] found id: ""
	I1213 10:39:00.933502  359214 logs.go:282] 0 containers: []
	W1213 10:39:00.933510  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:00.933518  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:00.933529  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:00.999819  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:00.990764   11836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:00.991604   11836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:00.993277   11836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:00.993599   11836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:00.995238   11836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:00.990764   11836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:00.991604   11836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:00.993277   11836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:00.993599   11836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:00.995238   11836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:00.999831  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:00.999851  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:01.070347  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:01.070376  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:01.099348  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:01.099367  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:01.160766  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:01.160789  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:03.683134  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:03.693419  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:03.693479  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:03.724358  359214 cri.go:89] found id: ""
	I1213 10:39:03.724373  359214 logs.go:282] 0 containers: []
	W1213 10:39:03.724380  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:03.724386  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:03.724446  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:03.749342  359214 cri.go:89] found id: ""
	I1213 10:39:03.749357  359214 logs.go:282] 0 containers: []
	W1213 10:39:03.749365  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:03.749370  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:03.749428  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:03.777066  359214 cri.go:89] found id: ""
	I1213 10:39:03.777081  359214 logs.go:282] 0 containers: []
	W1213 10:39:03.777088  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:03.777094  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:03.777153  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:03.802375  359214 cri.go:89] found id: ""
	I1213 10:39:03.802390  359214 logs.go:282] 0 containers: []
	W1213 10:39:03.802397  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:03.802405  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:03.802463  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:03.828597  359214 cri.go:89] found id: ""
	I1213 10:39:03.828613  359214 logs.go:282] 0 containers: []
	W1213 10:39:03.828620  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:03.828626  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:03.828688  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:03.854166  359214 cri.go:89] found id: ""
	I1213 10:39:03.854187  359214 logs.go:282] 0 containers: []
	W1213 10:39:03.854195  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:03.854201  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:03.854261  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:03.879516  359214 cri.go:89] found id: ""
	I1213 10:39:03.879533  359214 logs.go:282] 0 containers: []
	W1213 10:39:03.879540  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:03.879549  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:03.879559  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:03.936679  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:03.936700  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:03.953300  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:03.953317  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:04.029874  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:04.020037   11948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.021068   11948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.022008   11948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.023857   11948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.024567   11948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:04.020037   11948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.021068   11948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.022008   11948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.023857   11948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.024567   11948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:04.029886  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:04.029896  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:04.097622  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:04.097643  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:06.630848  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:06.641568  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:06.641629  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:06.667996  359214 cri.go:89] found id: ""
	I1213 10:39:06.668011  359214 logs.go:282] 0 containers: []
	W1213 10:39:06.668019  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:06.668024  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:06.668090  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:06.697263  359214 cri.go:89] found id: ""
	I1213 10:39:06.697278  359214 logs.go:282] 0 containers: []
	W1213 10:39:06.697293  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:06.697299  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:06.697359  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:06.722757  359214 cri.go:89] found id: ""
	I1213 10:39:06.722772  359214 logs.go:282] 0 containers: []
	W1213 10:39:06.722780  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:06.722785  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:06.722844  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:06.746758  359214 cri.go:89] found id: ""
	I1213 10:39:06.746772  359214 logs.go:282] 0 containers: []
	W1213 10:39:06.746780  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:06.746786  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:06.746845  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:06.775078  359214 cri.go:89] found id: ""
	I1213 10:39:06.775093  359214 logs.go:282] 0 containers: []
	W1213 10:39:06.775100  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:06.775105  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:06.775164  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:06.800898  359214 cri.go:89] found id: ""
	I1213 10:39:06.800914  359214 logs.go:282] 0 containers: []
	W1213 10:39:06.800921  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:06.800926  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:06.800983  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:06.829594  359214 cri.go:89] found id: ""
	I1213 10:39:06.829624  359214 logs.go:282] 0 containers: []
	W1213 10:39:06.829648  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:06.829656  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:06.829666  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:06.893293  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:06.893314  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:06.921544  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:06.921562  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:06.981949  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:06.981969  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:06.998794  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:06.998816  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:07.067966  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:07.059691   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.060374   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.061914   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.062229   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.063682   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:07.059691   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.060374   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.061914   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.062229   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.063682   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
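One detail worth noticing: the PID in the stderr lines increases on every cycle (11530, 11654, 11739, 11836, 11948, 12061, ...), so each "describe nodes" attempt is a fresh kubectl process rather than a retry inside one long-lived process. A throwaway parser makes that explicit (the sample input is abbreviated from the log above):

```go
// pids.go: count distinct kubectl PIDs in the memcache.go error lines.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	sample := `E1213 10:38:52.190917   11530 memcache.go:265] ...
E1213 10:38:55.300977   11654 memcache.go:265] ...
E1213 10:38:58.137210   11739 memcache.go:265] ...`
	// Matches: E<mmdd> <hh:mm:ss.us>   <pid> memcache.go
	re := regexp.MustCompile(`E\d{4} [\d:.]+\s+(\d+) memcache\.go`)
	seen := map[string]bool{}
	for _, m := range re.FindAllStringSubmatch(sample, -1) {
		seen[m[1]] = true
	}
	fmt.Printf("distinct kubectl invocations in sample: %d\n", len(seen)) // 3
}
```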
	I1213 10:39:09.568245  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:09.578515  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:09.578574  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:09.604486  359214 cri.go:89] found id: ""
	I1213 10:39:09.604500  359214 logs.go:282] 0 containers: []
	W1213 10:39:09.604507  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:09.604512  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:09.604572  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:09.628878  359214 cri.go:89] found id: ""
	I1213 10:39:09.628894  359214 logs.go:282] 0 containers: []
	W1213 10:39:09.628902  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:09.628912  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:09.628971  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:09.654182  359214 cri.go:89] found id: ""
	I1213 10:39:09.654196  359214 logs.go:282] 0 containers: []
	W1213 10:39:09.654204  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:09.654209  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:09.654268  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:09.679850  359214 cri.go:89] found id: ""
	I1213 10:39:09.679864  359214 logs.go:282] 0 containers: []
	W1213 10:39:09.679871  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:09.679877  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:09.679937  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:09.708630  359214 cri.go:89] found id: ""
	I1213 10:39:09.708644  359214 logs.go:282] 0 containers: []
	W1213 10:39:09.708651  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:09.708657  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:09.708716  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:09.732554  359214 cri.go:89] found id: ""
	I1213 10:39:09.732568  359214 logs.go:282] 0 containers: []
	W1213 10:39:09.732575  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:09.732581  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:09.732642  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:09.757631  359214 cri.go:89] found id: ""
	I1213 10:39:09.757646  359214 logs.go:282] 0 containers: []
	W1213 10:39:09.757654  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:09.757663  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:09.757674  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:09.816181  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:09.816203  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:09.832514  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:09.832531  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:09.897359  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:09.888543   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:09.889254   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:09.891102   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:09.891693   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:09.893450   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:09.888543   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:09.889254   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:09.891102   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:09.891693   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:09.893450   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:09.897369  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:09.897379  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:09.960943  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:09.960964  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:12.490984  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:12.501823  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:12.501893  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:12.532332  359214 cri.go:89] found id: ""
	I1213 10:39:12.532347  359214 logs.go:282] 0 containers: []
	W1213 10:39:12.532354  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:12.532359  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:12.532419  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:12.558457  359214 cri.go:89] found id: ""
	I1213 10:39:12.558471  359214 logs.go:282] 0 containers: []
	W1213 10:39:12.558479  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:12.558485  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:12.558545  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:12.585075  359214 cri.go:89] found id: ""
	I1213 10:39:12.585089  359214 logs.go:282] 0 containers: []
	W1213 10:39:12.585097  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:12.585102  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:12.585160  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:12.614401  359214 cri.go:89] found id: ""
	I1213 10:39:12.614415  359214 logs.go:282] 0 containers: []
	W1213 10:39:12.614422  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:12.614428  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:12.614486  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:12.639152  359214 cri.go:89] found id: ""
	I1213 10:39:12.639166  359214 logs.go:282] 0 containers: []
	W1213 10:39:12.639173  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:12.639179  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:12.639240  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:12.667593  359214 cri.go:89] found id: ""
	I1213 10:39:12.667607  359214 logs.go:282] 0 containers: []
	W1213 10:39:12.667614  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:12.667620  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:12.667681  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:12.691984  359214 cri.go:89] found id: ""
	I1213 10:39:12.691997  359214 logs.go:282] 0 containers: []
	W1213 10:39:12.692005  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:12.692013  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:12.692024  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:12.756546  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:12.748299   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:12.748690   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:12.750244   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:12.750570   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:12.752183   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:12.748299   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:12.748690   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:12.750244   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:12.750570   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:12.752183   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:12.756556  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:12.756567  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:12.820864  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:12.820885  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:12.853253  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:12.853289  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:12.911659  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:12.911678  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
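The container-status command in these cycles is written defensively: "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" falls back to docker both when crictl is missing from PATH and when the crictl invocation itself fails. A simplified Go equivalent of that fallback (it handles only the missing-binary case; the shell form also catches runtime failures):

```go
// status.go: prefer crictl for listing all containers, fall back to docker.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	if _, err := exec.LookPath("crictl"); err == nil {
		out, _ := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
		fmt.Print(string(out))
		return
	}
	// Fallback when crictl is not installed on the node.
	out, _ := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	fmt.Print(string(out))
}
```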
	I1213 10:39:15.427988  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:15.439459  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:15.439523  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:15.476834  359214 cri.go:89] found id: ""
	I1213 10:39:15.476849  359214 logs.go:282] 0 containers: []
	W1213 10:39:15.476856  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:15.476862  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:15.476926  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:15.501586  359214 cri.go:89] found id: ""
	I1213 10:39:15.501601  359214 logs.go:282] 0 containers: []
	W1213 10:39:15.501609  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:15.501614  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:15.501675  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:15.526367  359214 cri.go:89] found id: ""
	I1213 10:39:15.526381  359214 logs.go:282] 0 containers: []
	W1213 10:39:15.526399  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:15.526406  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:15.526473  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:15.551126  359214 cri.go:89] found id: ""
	I1213 10:39:15.551141  359214 logs.go:282] 0 containers: []
	W1213 10:39:15.551148  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:15.551154  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:15.551209  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:15.576958  359214 cri.go:89] found id: ""
	I1213 10:39:15.576973  359214 logs.go:282] 0 containers: []
	W1213 10:39:15.576990  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:15.576996  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:15.577062  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:15.601287  359214 cri.go:89] found id: ""
	I1213 10:39:15.601300  359214 logs.go:282] 0 containers: []
	W1213 10:39:15.601308  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:15.601313  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:15.601371  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:15.628822  359214 cri.go:89] found id: ""
	I1213 10:39:15.628837  359214 logs.go:282] 0 containers: []
	W1213 10:39:15.628844  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:15.628852  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:15.628862  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:15.644985  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:15.645002  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:15.711548  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:15.703095   12361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:15.703681   12361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:15.705285   12361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:15.705963   12361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:15.707559   12361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:15.703095   12361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:15.703681   12361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:15.705285   12361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:15.705963   12361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:15.707559   12361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:15.711559  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:15.711571  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:15.775011  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:15.775031  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:15.802522  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:15.802545  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:18.359921  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:18.369925  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:18.369992  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:18.393448  359214 cri.go:89] found id: ""
	I1213 10:39:18.393462  359214 logs.go:282] 0 containers: []
	W1213 10:39:18.393470  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:18.393476  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:18.393532  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:18.426863  359214 cri.go:89] found id: ""
	I1213 10:39:18.426876  359214 logs.go:282] 0 containers: []
	W1213 10:39:18.426884  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:18.426889  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:18.426946  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:18.472251  359214 cri.go:89] found id: ""
	I1213 10:39:18.472264  359214 logs.go:282] 0 containers: []
	W1213 10:39:18.472272  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:18.472277  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:18.472333  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:18.500412  359214 cri.go:89] found id: ""
	I1213 10:39:18.500427  359214 logs.go:282] 0 containers: []
	W1213 10:39:18.500434  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:18.500440  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:18.500500  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:18.524823  359214 cri.go:89] found id: ""
	I1213 10:39:18.524837  359214 logs.go:282] 0 containers: []
	W1213 10:39:18.524845  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:18.524850  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:18.524908  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:18.549332  359214 cri.go:89] found id: ""
	I1213 10:39:18.549346  359214 logs.go:282] 0 containers: []
	W1213 10:39:18.549354  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:18.549359  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:18.549417  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:18.577251  359214 cri.go:89] found id: ""
	I1213 10:39:18.577271  359214 logs.go:282] 0 containers: []
	W1213 10:39:18.577279  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:18.577287  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:18.577299  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:18.639510  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:18.639530  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:18.677762  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:18.677777  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:18.737061  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:18.737080  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:18.753422  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:18.753439  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:18.823128  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:18.814301   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:18.815633   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:18.816172   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:18.817539   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:18.818059   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:21.323418  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:21.333772  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:21.333833  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:21.368103  359214 cri.go:89] found id: ""
	I1213 10:39:21.368118  359214 logs.go:282] 0 containers: []
	W1213 10:39:21.368125  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:21.368131  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:21.368188  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:21.392848  359214 cri.go:89] found id: ""
	I1213 10:39:21.392862  359214 logs.go:282] 0 containers: []
	W1213 10:39:21.392870  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:21.392875  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:21.392932  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:21.426067  359214 cri.go:89] found id: ""
	I1213 10:39:21.426082  359214 logs.go:282] 0 containers: []
	W1213 10:39:21.426089  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:21.426094  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:21.426153  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:21.453497  359214 cri.go:89] found id: ""
	I1213 10:39:21.453521  359214 logs.go:282] 0 containers: []
	W1213 10:39:21.453529  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:21.453535  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:21.453600  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:21.486155  359214 cri.go:89] found id: ""
	I1213 10:39:21.486170  359214 logs.go:282] 0 containers: []
	W1213 10:39:21.486187  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:21.486193  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:21.486262  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:21.512133  359214 cri.go:89] found id: ""
	I1213 10:39:21.512148  359214 logs.go:282] 0 containers: []
	W1213 10:39:21.512155  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:21.512161  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:21.512219  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:21.536909  359214 cri.go:89] found id: ""
	I1213 10:39:21.536925  359214 logs.go:282] 0 containers: []
	W1213 10:39:21.536932  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:21.536940  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:21.536951  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:21.564635  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:21.564651  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:21.621861  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:21.621882  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:21.638280  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:21.638297  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:21.706649  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:21.698160   12585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.698774   12585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.700554   12585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.701257   12585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.702523   12585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:21.706660  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:21.706678  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:24.270851  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:24.281891  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:24.281959  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:24.306887  359214 cri.go:89] found id: ""
	I1213 10:39:24.306902  359214 logs.go:282] 0 containers: []
	W1213 10:39:24.306910  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:24.306916  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:24.306989  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:24.330995  359214 cri.go:89] found id: ""
	I1213 10:39:24.331009  359214 logs.go:282] 0 containers: []
	W1213 10:39:24.331018  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:24.331023  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:24.331079  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:24.358824  359214 cri.go:89] found id: ""
	I1213 10:39:24.358838  359214 logs.go:282] 0 containers: []
	W1213 10:39:24.358845  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:24.358850  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:24.358907  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:24.383545  359214 cri.go:89] found id: ""
	I1213 10:39:24.383559  359214 logs.go:282] 0 containers: []
	W1213 10:39:24.383566  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:24.383572  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:24.383628  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:24.407288  359214 cri.go:89] found id: ""
	I1213 10:39:24.407302  359214 logs.go:282] 0 containers: []
	W1213 10:39:24.407309  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:24.407315  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:24.407374  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:24.441689  359214 cri.go:89] found id: ""
	I1213 10:39:24.441703  359214 logs.go:282] 0 containers: []
	W1213 10:39:24.441720  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:24.441727  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:24.441796  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:24.469372  359214 cri.go:89] found id: ""
	I1213 10:39:24.469387  359214 logs.go:282] 0 containers: []
	W1213 10:39:24.469394  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:24.469402  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:24.469418  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:24.529071  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:24.529091  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:24.545770  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:24.545786  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:24.619385  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:24.610753   12679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.611526   12679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.613120   12679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.613552   12679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.615328   12679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:24.619395  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:24.619406  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:24.683002  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:24.683029  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:27.214048  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:27.223825  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:27.223885  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:27.249091  359214 cri.go:89] found id: ""
	I1213 10:39:27.249106  359214 logs.go:282] 0 containers: []
	W1213 10:39:27.249114  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:27.249120  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:27.249175  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:27.274216  359214 cri.go:89] found id: ""
	I1213 10:39:27.274231  359214 logs.go:282] 0 containers: []
	W1213 10:39:27.274238  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:27.274243  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:27.274301  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:27.306051  359214 cri.go:89] found id: ""
	I1213 10:39:27.306068  359214 logs.go:282] 0 containers: []
	W1213 10:39:27.306076  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:27.306081  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:27.306162  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:27.329993  359214 cri.go:89] found id: ""
	I1213 10:39:27.330015  359214 logs.go:282] 0 containers: []
	W1213 10:39:27.330022  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:27.330027  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:27.330084  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:27.357738  359214 cri.go:89] found id: ""
	I1213 10:39:27.357759  359214 logs.go:282] 0 containers: []
	W1213 10:39:27.357766  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:27.357772  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:27.357829  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:27.383932  359214 cri.go:89] found id: ""
	I1213 10:39:27.383948  359214 logs.go:282] 0 containers: []
	W1213 10:39:27.383955  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:27.383960  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:27.384021  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:27.408273  359214 cri.go:89] found id: ""
	I1213 10:39:27.408298  359214 logs.go:282] 0 containers: []
	W1213 10:39:27.408306  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:27.408314  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:27.408324  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:27.473400  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:27.473421  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:27.490562  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:27.490580  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:27.560540  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:27.551714   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.552445   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.554637   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.555366   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.556555   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:27.560551  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:27.560562  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:27.623676  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:27.623700  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:30.153068  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:30.164672  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:30.164745  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:30.192223  359214 cri.go:89] found id: ""
	I1213 10:39:30.192239  359214 logs.go:282] 0 containers: []
	W1213 10:39:30.192248  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:30.192254  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:30.192336  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:30.224222  359214 cri.go:89] found id: ""
	I1213 10:39:30.224237  359214 logs.go:282] 0 containers: []
	W1213 10:39:30.224245  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:30.224251  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:30.224319  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:30.250132  359214 cri.go:89] found id: ""
	I1213 10:39:30.250148  359214 logs.go:282] 0 containers: []
	W1213 10:39:30.250156  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:30.250161  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:30.250232  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:30.278166  359214 cri.go:89] found id: ""
	I1213 10:39:30.278182  359214 logs.go:282] 0 containers: []
	W1213 10:39:30.278199  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:30.278205  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:30.278271  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:30.304028  359214 cri.go:89] found id: ""
	I1213 10:39:30.304043  359214 logs.go:282] 0 containers: []
	W1213 10:39:30.304050  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:30.304055  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:30.304112  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:30.328660  359214 cri.go:89] found id: ""
	I1213 10:39:30.328675  359214 logs.go:282] 0 containers: []
	W1213 10:39:30.328693  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:30.328699  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:30.328767  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:30.352850  359214 cri.go:89] found id: ""
	I1213 10:39:30.352865  359214 logs.go:282] 0 containers: []
	W1213 10:39:30.352877  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:30.352886  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:30.352896  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:30.408893  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:30.408912  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:30.428762  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:30.428779  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:30.500428  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:30.492113   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.492871   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.494609   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.495292   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.496285   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:30.500438  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:30.500449  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:30.563541  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:30.563560  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:33.092955  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:33.103393  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:33.103457  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:33.128626  359214 cri.go:89] found id: ""
	I1213 10:39:33.128640  359214 logs.go:282] 0 containers: []
	W1213 10:39:33.128647  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:33.128653  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:33.128709  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:33.156533  359214 cri.go:89] found id: ""
	I1213 10:39:33.156548  359214 logs.go:282] 0 containers: []
	W1213 10:39:33.156555  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:33.156561  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:33.156631  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:33.181965  359214 cri.go:89] found id: ""
	I1213 10:39:33.181979  359214 logs.go:282] 0 containers: []
	W1213 10:39:33.181987  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:33.181992  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:33.182066  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:33.210753  359214 cri.go:89] found id: ""
	I1213 10:39:33.210767  359214 logs.go:282] 0 containers: []
	W1213 10:39:33.210775  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:33.210780  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:33.210846  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:33.236369  359214 cri.go:89] found id: ""
	I1213 10:39:33.236384  359214 logs.go:282] 0 containers: []
	W1213 10:39:33.236391  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:33.236396  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:33.236453  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:33.261374  359214 cri.go:89] found id: ""
	I1213 10:39:33.261390  359214 logs.go:282] 0 containers: []
	W1213 10:39:33.261397  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:33.261403  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:33.261476  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:33.286480  359214 cri.go:89] found id: ""
	I1213 10:39:33.286496  359214 logs.go:282] 0 containers: []
	W1213 10:39:33.286512  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:33.286536  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:33.286547  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:33.344247  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:33.344268  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:33.362163  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:33.362178  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:33.431331  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:33.423097   12996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.423938   12996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.425571   12996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.425890   12996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.427375   12996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:33.431340  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:33.431351  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:33.514221  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:33.514250  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:36.043055  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:36.053301  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:36.053366  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:36.078047  359214 cri.go:89] found id: ""
	I1213 10:39:36.078061  359214 logs.go:282] 0 containers: []
	W1213 10:39:36.078069  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:36.078074  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:36.078135  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:36.104994  359214 cri.go:89] found id: ""
	I1213 10:39:36.105009  359214 logs.go:282] 0 containers: []
	W1213 10:39:36.105017  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:36.105022  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:36.105083  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:36.138243  359214 cri.go:89] found id: ""
	I1213 10:39:36.138257  359214 logs.go:282] 0 containers: []
	W1213 10:39:36.138264  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:36.138270  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:36.138331  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:36.163657  359214 cri.go:89] found id: ""
	I1213 10:39:36.163672  359214 logs.go:282] 0 containers: []
	W1213 10:39:36.163679  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:36.163685  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:36.163744  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:36.192631  359214 cri.go:89] found id: ""
	I1213 10:39:36.192646  359214 logs.go:282] 0 containers: []
	W1213 10:39:36.192653  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:36.192658  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:36.192715  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:36.217613  359214 cri.go:89] found id: ""
	I1213 10:39:36.217626  359214 logs.go:282] 0 containers: []
	W1213 10:39:36.217634  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:36.217641  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:36.217699  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:36.242973  359214 cri.go:89] found id: ""
	I1213 10:39:36.242988  359214 logs.go:282] 0 containers: []
	W1213 10:39:36.242995  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:36.243004  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:36.243015  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:36.299822  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:36.299843  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:36.316930  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:36.316947  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:36.384839  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:36.376386   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.377339   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.379075   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.379670   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.381017   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:36.384850  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:36.384860  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:36.453800  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:36.453820  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:38.992805  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:39.004323  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:39.004395  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:39.029542  359214 cri.go:89] found id: ""
	I1213 10:39:39.029556  359214 logs.go:282] 0 containers: []
	W1213 10:39:39.029564  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:39.029569  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:39.029634  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:39.058191  359214 cri.go:89] found id: ""
	I1213 10:39:39.058205  359214 logs.go:282] 0 containers: []
	W1213 10:39:39.058212  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:39.058217  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:39.058278  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:39.082506  359214 cri.go:89] found id: ""
	I1213 10:39:39.082520  359214 logs.go:282] 0 containers: []
	W1213 10:39:39.082527  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:39.082532  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:39.082588  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:39.107708  359214 cri.go:89] found id: ""
	I1213 10:39:39.107722  359214 logs.go:282] 0 containers: []
	W1213 10:39:39.107729  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:39.107735  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:39.107795  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:39.134092  359214 cri.go:89] found id: ""
	I1213 10:39:39.134106  359214 logs.go:282] 0 containers: []
	W1213 10:39:39.134114  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:39.134119  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:39.134176  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:39.159493  359214 cri.go:89] found id: ""
	I1213 10:39:39.159508  359214 logs.go:282] 0 containers: []
	W1213 10:39:39.159516  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:39.159521  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:39.159586  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:39.185250  359214 cri.go:89] found id: ""
	I1213 10:39:39.185270  359214 logs.go:282] 0 containers: []
	W1213 10:39:39.185278  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:39.185285  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:39.185296  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:39.212945  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:39.212964  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:39.270421  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:39.270441  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:39.287465  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:39.287483  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:39.353697  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:39.344780   13216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.345537   13216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.347125   13216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.347630   13216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.349255   13216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:39.353707  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:39.353719  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
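The timestamps show the same probe repeating roughly every three seconds: pgrep for a kube-apiserver process, then one crictl listing per control-plane component. A minimal sketch of that poll loop in Go, assuming the pgrep pattern taken from the log and a caller-chosen timeout (minikube's own wait logic is more involved and is not shown here):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer re-runs pgrep until a kube-apiserver process appears or
// the deadline passes. pgrep exits nonzero while nothing matches, so a nil
// error from Run doubles as "the process exists".
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(3 * time.Second) // the cadence visible in the log
	}
	return errors.New("kube-apiserver never came up")
}

func main() {
	if err := waitForAPIServer(time.Minute); err != nil {
		fmt.Println(err)
	}
}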
	I1213 10:39:41.923052  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:41.933314  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:41.933380  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:41.957979  359214 cri.go:89] found id: ""
	I1213 10:39:41.957994  359214 logs.go:282] 0 containers: []
	W1213 10:39:41.958001  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:41.958006  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:41.958063  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:41.982504  359214 cri.go:89] found id: ""
	I1213 10:39:41.982519  359214 logs.go:282] 0 containers: []
	W1213 10:39:41.982527  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:41.982532  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:41.982594  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:42.034066  359214 cri.go:89] found id: ""
	I1213 10:39:42.034090  359214 logs.go:282] 0 containers: []
	W1213 10:39:42.034098  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:42.034103  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:42.034170  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:42.060660  359214 cri.go:89] found id: ""
	I1213 10:39:42.060675  359214 logs.go:282] 0 containers: []
	W1213 10:39:42.060682  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:42.060688  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:42.060760  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:42.089100  359214 cri.go:89] found id: ""
	I1213 10:39:42.089116  359214 logs.go:282] 0 containers: []
	W1213 10:39:42.089125  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:42.089131  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:42.089206  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:42.124357  359214 cri.go:89] found id: ""
	I1213 10:39:42.124373  359214 logs.go:282] 0 containers: []
	W1213 10:39:42.124382  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:42.124388  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:42.124457  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:42.154537  359214 cri.go:89] found id: ""
	I1213 10:39:42.154552  359214 logs.go:282] 0 containers: []
	W1213 10:39:42.154560  359214 logs.go:284] No container was found matching "kindnet"
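	The scan above checks each control-plane component in turn (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) and finds zero containers for all of them. Because -a includes exited containers, an empty result means containerd has no record of the container ever having been created, so the failure sits at pod creation rather than inside any individual component. Per component it boils down to:

	    # --quiet prints container IDs only; -a includes exited containers.
	    # No output at all = the container never existed, not merely crashed.
	    sudo crictl ps -a --quiet --name=kube-apiserver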
	I1213 10:39:42.154568  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:42.154580  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:42.236098  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:42.226374   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.227371   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.229046   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.229696   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.231326   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:42.236116  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:42.236128  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:42.301179  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:42.301201  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
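	The container-status command is written defensively: the backticks substitute crictl's full path when which can find it (else the bare name), and if the whole crictl invocation fails, the outer || falls through to Docker. The same logic without backticks:

	    # Equivalent expansion of the one-liner above.
	    CRICTL="$(which crictl || echo crictl)"
	    sudo "$CRICTL" ps -a || sudo docker ps -a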
	I1213 10:39:42.331860  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:42.331876  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:42.389580  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:42.389599  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
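	For reference, the dmesg sweep's flags (util-linux dmesg):

	    # -P         do not pipe output into a pager
	    # -H         human-readable output
	    # -L=never   no color escapes, keeping the capture plain text
	    # --level    keep only warn-or-worse kernel records
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400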
	I1213 10:39:44.907943  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:44.917971  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:44.918030  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:44.944860  359214 cri.go:89] found id: ""
	I1213 10:39:44.944876  359214 logs.go:282] 0 containers: []
	W1213 10:39:44.944883  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:44.944889  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:44.944947  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:44.969171  359214 cri.go:89] found id: ""
	I1213 10:39:44.969185  359214 logs.go:282] 0 containers: []
	W1213 10:39:44.969192  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:44.969197  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:44.969274  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:44.993953  359214 cri.go:89] found id: ""
	I1213 10:39:44.993968  359214 logs.go:282] 0 containers: []
	W1213 10:39:44.993975  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:44.993980  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:44.994036  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:45.047270  359214 cri.go:89] found id: ""
	I1213 10:39:45.047286  359214 logs.go:282] 0 containers: []
	W1213 10:39:45.047295  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:45.047308  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:45.047383  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:45.081157  359214 cri.go:89] found id: ""
	I1213 10:39:45.081173  359214 logs.go:282] 0 containers: []
	W1213 10:39:45.081182  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:45.081189  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:45.081275  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:45.121621  359214 cri.go:89] found id: ""
	I1213 10:39:45.121638  359214 logs.go:282] 0 containers: []
	W1213 10:39:45.121646  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:45.121652  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:45.121723  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:45.178070  359214 cri.go:89] found id: ""
	I1213 10:39:45.178087  359214 logs.go:282] 0 containers: []
	W1213 10:39:45.178095  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:45.178105  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:45.178117  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:45.242653  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:45.242715  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:45.312989  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:45.313030  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:45.333875  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:45.333893  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:45.402702  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:45.394811   13430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:45.395310   13430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:45.396989   13430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:45.397342   13430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:45.398810   13430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
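	From here the same sweep simply repeats on a short interval (every 2.5-3 s by the timestamps): pgrep, the seven crictl lookups, then the kubelet/dmesg/describe-nodes/containerd/container-status gathers, until the apiserver appears or the start deadline expires. A hypothetical one-liner expressing the same wait condition (not the code's actual loop):

	    # Poll until any apiserver container, running or exited, is visible to the CRI.
	    until sudo crictl ps -a --quiet --name=kube-apiserver | grep -q .; do sleep 3; done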
	I1213 10:39:45.402713  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:45.402724  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:47.974092  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:47.984508  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:47.984581  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:48.011411  359214 cri.go:89] found id: ""
	I1213 10:39:48.011427  359214 logs.go:282] 0 containers: []
	W1213 10:39:48.011434  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:48.011440  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:48.011500  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:48.037430  359214 cri.go:89] found id: ""
	I1213 10:39:48.037445  359214 logs.go:282] 0 containers: []
	W1213 10:39:48.037464  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:48.037470  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:48.037541  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:48.068968  359214 cri.go:89] found id: ""
	I1213 10:39:48.068982  359214 logs.go:282] 0 containers: []
	W1213 10:39:48.068989  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:48.068994  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:48.069053  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:48.093935  359214 cri.go:89] found id: ""
	I1213 10:39:48.093949  359214 logs.go:282] 0 containers: []
	W1213 10:39:48.093966  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:48.093982  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:48.094054  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:48.118617  359214 cri.go:89] found id: ""
	I1213 10:39:48.118631  359214 logs.go:282] 0 containers: []
	W1213 10:39:48.118647  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:48.118653  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:48.118742  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:48.147778  359214 cri.go:89] found id: ""
	I1213 10:39:48.147792  359214 logs.go:282] 0 containers: []
	W1213 10:39:48.147802  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:48.147807  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:48.147866  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:48.171531  359214 cri.go:89] found id: ""
	I1213 10:39:48.171546  359214 logs.go:282] 0 containers: []
	W1213 10:39:48.171553  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:48.171562  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:48.171572  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:48.228511  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:48.228531  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:48.244723  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:48.244738  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:48.313285  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:48.305099   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:48.305923   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:48.307541   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:48.308109   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:48.309223   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:48.313296  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:48.313307  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:48.374383  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:48.374405  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:50.902721  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:50.912675  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:50.912735  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:50.936964  359214 cri.go:89] found id: ""
	I1213 10:39:50.936978  359214 logs.go:282] 0 containers: []
	W1213 10:39:50.936986  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:50.936991  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:50.937050  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:50.960978  359214 cri.go:89] found id: ""
	I1213 10:39:50.960991  359214 logs.go:282] 0 containers: []
	W1213 10:39:50.960999  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:50.961004  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:50.961060  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:50.985441  359214 cri.go:89] found id: ""
	I1213 10:39:50.985455  359214 logs.go:282] 0 containers: []
	W1213 10:39:50.985462  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:50.985467  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:50.985524  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:51.012305  359214 cri.go:89] found id: ""
	I1213 10:39:51.012320  359214 logs.go:282] 0 containers: []
	W1213 10:39:51.012327  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:51.012333  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:51.012394  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:51.037844  359214 cri.go:89] found id: ""
	I1213 10:39:51.037858  359214 logs.go:282] 0 containers: []
	W1213 10:39:51.037865  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:51.037871  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:51.037930  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:51.062094  359214 cri.go:89] found id: ""
	I1213 10:39:51.062108  359214 logs.go:282] 0 containers: []
	W1213 10:39:51.062115  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:51.062121  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:51.062178  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:51.087816  359214 cri.go:89] found id: ""
	I1213 10:39:51.087831  359214 logs.go:282] 0 containers: []
	W1213 10:39:51.087839  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:51.087848  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:51.087860  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:51.144441  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:51.144462  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:51.161532  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:51.161551  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:51.232639  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:51.224130   13628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.224905   13628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.226632   13628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.227272   13628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.228817   13628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:51.232650  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:51.232662  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:51.300854  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:51.300877  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:53.830183  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:53.840765  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:53.840829  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:53.867482  359214 cri.go:89] found id: ""
	I1213 10:39:53.867497  359214 logs.go:282] 0 containers: []
	W1213 10:39:53.867504  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:53.867510  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:53.867572  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:53.896830  359214 cri.go:89] found id: ""
	I1213 10:39:53.896844  359214 logs.go:282] 0 containers: []
	W1213 10:39:53.896852  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:53.896857  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:53.896921  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:53.921163  359214 cri.go:89] found id: ""
	I1213 10:39:53.921177  359214 logs.go:282] 0 containers: []
	W1213 10:39:53.921185  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:53.921190  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:53.921247  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:53.947006  359214 cri.go:89] found id: ""
	I1213 10:39:53.947020  359214 logs.go:282] 0 containers: []
	W1213 10:39:53.947027  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:53.947033  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:53.947089  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:53.971965  359214 cri.go:89] found id: ""
	I1213 10:39:53.971979  359214 logs.go:282] 0 containers: []
	W1213 10:39:53.971986  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:53.971992  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:53.972050  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:53.996770  359214 cri.go:89] found id: ""
	I1213 10:39:53.996785  359214 logs.go:282] 0 containers: []
	W1213 10:39:53.996792  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:53.996797  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:53.996856  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:54.029511  359214 cri.go:89] found id: ""
	I1213 10:39:54.029526  359214 logs.go:282] 0 containers: []
	W1213 10:39:54.029534  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:54.029542  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:54.029553  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:54.063523  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:54.063540  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:54.120600  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:54.120624  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:54.136821  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:54.136839  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:54.210067  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:54.201894   13746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:54.202593   13746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:54.204122   13746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:54.204658   13746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:54.206168   13746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:54.210077  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:54.210087  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:56.773483  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:56.783689  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:56.783766  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:56.808277  359214 cri.go:89] found id: ""
	I1213 10:39:56.808291  359214 logs.go:282] 0 containers: []
	W1213 10:39:56.808299  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:56.808304  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:56.808368  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:56.832949  359214 cri.go:89] found id: ""
	I1213 10:39:56.832963  359214 logs.go:282] 0 containers: []
	W1213 10:39:56.832970  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:56.832976  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:56.833036  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:56.858222  359214 cri.go:89] found id: ""
	I1213 10:39:56.858236  359214 logs.go:282] 0 containers: []
	W1213 10:39:56.858250  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:56.858255  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:56.858313  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:56.886516  359214 cri.go:89] found id: ""
	I1213 10:39:56.886531  359214 logs.go:282] 0 containers: []
	W1213 10:39:56.886538  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:56.886543  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:56.886599  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:56.916534  359214 cri.go:89] found id: ""
	I1213 10:39:56.916548  359214 logs.go:282] 0 containers: []
	W1213 10:39:56.916554  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:56.916560  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:56.916620  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:56.941364  359214 cri.go:89] found id: ""
	I1213 10:39:56.941379  359214 logs.go:282] 0 containers: []
	W1213 10:39:56.941391  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:56.941397  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:56.941458  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:56.965977  359214 cri.go:89] found id: ""
	I1213 10:39:56.965991  359214 logs.go:282] 0 containers: []
	W1213 10:39:56.965998  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:56.966006  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:56.966017  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:57.022046  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:57.022066  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:57.038754  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:57.038773  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:57.104023  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:57.095403   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.096172   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.097756   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.098390   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.100006   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:57.104033  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:57.104043  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:57.164889  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:57.164909  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:59.697427  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:59.709225  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:59.709293  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:59.736814  359214 cri.go:89] found id: ""
	I1213 10:39:59.736828  359214 logs.go:282] 0 containers: []
	W1213 10:39:59.736835  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:59.736840  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:59.736897  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:59.765228  359214 cri.go:89] found id: ""
	I1213 10:39:59.765243  359214 logs.go:282] 0 containers: []
	W1213 10:39:59.765250  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:59.765255  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:59.765321  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:59.790792  359214 cri.go:89] found id: ""
	I1213 10:39:59.790807  359214 logs.go:282] 0 containers: []
	W1213 10:39:59.790814  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:59.790819  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:59.790877  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:59.817123  359214 cri.go:89] found id: ""
	I1213 10:39:59.817137  359214 logs.go:282] 0 containers: []
	W1213 10:39:59.817149  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:59.817161  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:59.817225  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:59.842465  359214 cri.go:89] found id: ""
	I1213 10:39:59.842480  359214 logs.go:282] 0 containers: []
	W1213 10:39:59.842488  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:59.842493  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:59.842557  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:59.871828  359214 cri.go:89] found id: ""
	I1213 10:39:59.871842  359214 logs.go:282] 0 containers: []
	W1213 10:39:59.871859  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:59.871865  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:59.871921  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:59.895975  359214 cri.go:89] found id: ""
	I1213 10:39:59.895989  359214 logs.go:282] 0 containers: []
	W1213 10:39:59.895996  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:59.896004  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:59.896014  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:59.953038  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:59.953058  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:59.970121  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:59.970140  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:00.112897  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:00.082810   13940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.086414   13940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.089161   13940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.089674   13940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.099187   13940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:40:00.112910  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:00.112922  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:00.251770  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:00.251795  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:02.813529  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:02.825083  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:02.825143  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:02.849893  359214 cri.go:89] found id: ""
	I1213 10:40:02.849907  359214 logs.go:282] 0 containers: []
	W1213 10:40:02.849915  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:02.849920  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:02.849979  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:02.876288  359214 cri.go:89] found id: ""
	I1213 10:40:02.876303  359214 logs.go:282] 0 containers: []
	W1213 10:40:02.876311  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:02.876316  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:02.876376  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:02.900996  359214 cri.go:89] found id: ""
	I1213 10:40:02.901011  359214 logs.go:282] 0 containers: []
	W1213 10:40:02.901018  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:02.901023  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:02.901085  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:02.941121  359214 cri.go:89] found id: ""
	I1213 10:40:02.941135  359214 logs.go:282] 0 containers: []
	W1213 10:40:02.941142  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:02.941148  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:02.941212  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:02.977122  359214 cri.go:89] found id: ""
	I1213 10:40:02.977137  359214 logs.go:282] 0 containers: []
	W1213 10:40:02.977145  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:02.977151  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:02.977211  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:03.007614  359214 cri.go:89] found id: ""
	I1213 10:40:03.007631  359214 logs.go:282] 0 containers: []
	W1213 10:40:03.007638  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:03.007644  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:03.007712  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:03.035112  359214 cri.go:89] found id: ""
	I1213 10:40:03.035128  359214 logs.go:282] 0 containers: []
	W1213 10:40:03.035135  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:03.035143  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:03.035153  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:03.092346  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:03.092365  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:03.109513  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:03.109531  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:03.178080  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:03.169681   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.170216   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.171843   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.172389   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.174013   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:40:03.178092  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:03.178103  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:03.240824  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:03.240843  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:05.775438  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:05.785647  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:05.785707  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:05.809484  359214 cri.go:89] found id: ""
	I1213 10:40:05.809497  359214 logs.go:282] 0 containers: []
	W1213 10:40:05.809505  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:05.809510  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:05.809569  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:05.834754  359214 cri.go:89] found id: ""
	I1213 10:40:05.834769  359214 logs.go:282] 0 containers: []
	W1213 10:40:05.834777  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:05.834782  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:05.834844  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:05.858984  359214 cri.go:89] found id: ""
	I1213 10:40:05.858999  359214 logs.go:282] 0 containers: []
	W1213 10:40:05.859006  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:05.859011  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:05.859072  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:05.884414  359214 cri.go:89] found id: ""
	I1213 10:40:05.884429  359214 logs.go:282] 0 containers: []
	W1213 10:40:05.884436  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:05.884442  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:05.884504  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:05.918776  359214 cri.go:89] found id: ""
	I1213 10:40:05.918799  359214 logs.go:282] 0 containers: []
	W1213 10:40:05.918807  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:05.918812  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:05.918880  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:05.963307  359214 cri.go:89] found id: ""
	I1213 10:40:05.963331  359214 logs.go:282] 0 containers: []
	W1213 10:40:05.963340  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:05.963346  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:05.963414  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:05.989236  359214 cri.go:89] found id: ""
	I1213 10:40:05.989252  359214 logs.go:282] 0 containers: []
	W1213 10:40:05.989260  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:05.989274  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:05.989284  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:06.046789  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:06.046809  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:06.063391  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:06.063408  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:06.133569  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:06.125185   14153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.125769   14153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.127395   14153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.128000   14153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.129671   14153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:40:06.133579  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:06.133590  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:06.199358  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:06.199385  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
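Every describe-nodes attempt fails identically because nothing answers on 127.0.0.1:8441. A hypothetical manual check (not part of the test run; ss and curl are assumed to be available on the node) that would confirm this directly:

    # Is anything listening on the apiserver port?
    sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
    # Hit the health endpoint; with no apiserver process this fails with
    # the same 'connection refused' reported above.
    curl -sk https://localhost:8441/healthz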
	I1213 10:40:08.731038  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:08.741608  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:08.741668  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:08.770775  359214 cri.go:89] found id: ""
	I1213 10:40:08.770798  359214 logs.go:282] 0 containers: []
	W1213 10:40:08.770806  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:08.770812  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:08.770880  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:08.795812  359214 cri.go:89] found id: ""
	I1213 10:40:08.795826  359214 logs.go:282] 0 containers: []
	W1213 10:40:08.795834  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:08.795839  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:08.795900  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:08.821389  359214 cri.go:89] found id: ""
	I1213 10:40:08.821405  359214 logs.go:282] 0 containers: []
	W1213 10:40:08.821415  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:08.821420  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:08.821484  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:08.847242  359214 cri.go:89] found id: ""
	I1213 10:40:08.847256  359214 logs.go:282] 0 containers: []
	W1213 10:40:08.847265  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:08.847271  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:08.847337  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:08.873913  359214 cri.go:89] found id: ""
	I1213 10:40:08.873927  359214 logs.go:282] 0 containers: []
	W1213 10:40:08.873935  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:08.873940  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:08.874003  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:08.898969  359214 cri.go:89] found id: ""
	I1213 10:40:08.898983  359214 logs.go:282] 0 containers: []
	W1213 10:40:08.898990  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:08.898997  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:08.899063  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:08.936984  359214 cri.go:89] found id: ""
	I1213 10:40:08.936999  359214 logs.go:282] 0 containers: []
	W1213 10:40:08.937006  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:08.937015  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:08.937026  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:09.003459  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:09.003483  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:09.022648  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:09.022673  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:09.089911  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:09.081728   14254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.082500   14254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.083990   14254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.084516   14254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.086022   14254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:40:09.089922  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:09.089934  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:09.152235  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:09.152255  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
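Each cycle opens with the same process check before falling back to the CRI probes. The pgrep invocation, annotated (quoting added here for shell safety; the flags are standard procps pgrep):

    # -x  match the whole command line exactly (together with -f)
    # -n  select only the newest matching process
    # -f  match against the full command line, not just the process name
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'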
	I1213 10:40:11.681167  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:11.691399  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:11.691463  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:11.720896  359214 cri.go:89] found id: ""
	I1213 10:40:11.720910  359214 logs.go:282] 0 containers: []
	W1213 10:40:11.720918  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:11.720924  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:11.720987  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:11.746089  359214 cri.go:89] found id: ""
	I1213 10:40:11.746103  359214 logs.go:282] 0 containers: []
	W1213 10:40:11.746111  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:11.746117  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:11.746176  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:11.770642  359214 cri.go:89] found id: ""
	I1213 10:40:11.770657  359214 logs.go:282] 0 containers: []
	W1213 10:40:11.770664  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:11.770670  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:11.770759  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:11.798877  359214 cri.go:89] found id: ""
	I1213 10:40:11.798891  359214 logs.go:282] 0 containers: []
	W1213 10:40:11.798900  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:11.798905  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:11.798965  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:11.824512  359214 cri.go:89] found id: ""
	I1213 10:40:11.824526  359214 logs.go:282] 0 containers: []
	W1213 10:40:11.824534  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:11.824539  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:11.824596  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:11.849644  359214 cri.go:89] found id: ""
	I1213 10:40:11.849658  359214 logs.go:282] 0 containers: []
	W1213 10:40:11.849665  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:11.849671  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:11.849728  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:11.878171  359214 cri.go:89] found id: ""
	I1213 10:40:11.878185  359214 logs.go:282] 0 containers: []
	W1213 10:40:11.878192  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:11.878201  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:11.878213  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:11.942012  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:11.942033  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:11.973830  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:11.973849  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:12.038115  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:12.038135  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:12.055328  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:12.055345  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:12.122312  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:12.113825   14374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:12.114885   14374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:12.116494   14374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:12.116834   14374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:12.118378   14374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:40:14.622545  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:14.632872  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:14.632931  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:14.660285  359214 cri.go:89] found id: ""
	I1213 10:40:14.660300  359214 logs.go:282] 0 containers: []
	W1213 10:40:14.660308  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:14.660313  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:14.660370  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:14.686341  359214 cri.go:89] found id: ""
	I1213 10:40:14.686355  359214 logs.go:282] 0 containers: []
	W1213 10:40:14.686362  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:14.686368  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:14.686427  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:14.710306  359214 cri.go:89] found id: ""
	I1213 10:40:14.710321  359214 logs.go:282] 0 containers: []
	W1213 10:40:14.710328  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:14.710334  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:14.710392  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:14.736823  359214 cri.go:89] found id: ""
	I1213 10:40:14.736838  359214 logs.go:282] 0 containers: []
	W1213 10:40:14.736846  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:14.736851  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:14.736909  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:14.761623  359214 cri.go:89] found id: ""
	I1213 10:40:14.761638  359214 logs.go:282] 0 containers: []
	W1213 10:40:14.761645  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:14.761651  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:14.761710  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:14.786707  359214 cri.go:89] found id: ""
	I1213 10:40:14.786721  359214 logs.go:282] 0 containers: []
	W1213 10:40:14.786729  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:14.786734  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:14.786795  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:14.816346  359214 cri.go:89] found id: ""
	I1213 10:40:14.816361  359214 logs.go:282] 0 containers: []
	W1213 10:40:14.816368  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:14.816376  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:14.816386  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:14.877767  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:14.877786  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:14.914260  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:14.914277  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:14.980282  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:14.980303  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:14.996741  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:14.996760  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:15.099242  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:15.090567   14473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:15.091275   14473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:15.092910   14473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:15.093493   14473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:15.095098   14473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:40:17.600882  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:17.611377  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:17.611437  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:17.639825  359214 cri.go:89] found id: ""
	I1213 10:40:17.639840  359214 logs.go:282] 0 containers: []
	W1213 10:40:17.639847  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:17.639853  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:17.639912  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:17.664963  359214 cri.go:89] found id: ""
	I1213 10:40:17.664977  359214 logs.go:282] 0 containers: []
	W1213 10:40:17.664985  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:17.664990  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:17.665052  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:17.690137  359214 cri.go:89] found id: ""
	I1213 10:40:17.690152  359214 logs.go:282] 0 containers: []
	W1213 10:40:17.690159  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:17.690165  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:17.690230  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:17.715292  359214 cri.go:89] found id: ""
	I1213 10:40:17.715307  359214 logs.go:282] 0 containers: []
	W1213 10:40:17.715315  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:17.715320  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:17.715382  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:17.744729  359214 cri.go:89] found id: ""
	I1213 10:40:17.744743  359214 logs.go:282] 0 containers: []
	W1213 10:40:17.744750  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:17.744756  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:17.744815  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:17.772253  359214 cri.go:89] found id: ""
	I1213 10:40:17.772268  359214 logs.go:282] 0 containers: []
	W1213 10:40:17.772276  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:17.772282  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:17.772348  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:17.797214  359214 cri.go:89] found id: ""
	I1213 10:40:17.797229  359214 logs.go:282] 0 containers: []
	W1213 10:40:17.797237  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:17.797245  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:17.797255  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:17.852633  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:17.852653  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:17.869612  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:17.869633  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:17.936787  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:17.927568   14559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:17.928465   14559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:17.930186   14559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:17.930475   14559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:17.932615   14559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:40:17.936804  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:17.936815  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:18.005630  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:18.005656  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
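The recurring "container status" one-liner packs a double fallback into a single command; the same logic spelled out:

    CRICTL=$(which crictl || echo crictl)   # fall back to the bare name if crictl is not on PATH
    sudo "$CRICTL" ps -a \
      || sudo docker ps -a                  # last resort: ask Docker instead of the CRI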
	I1213 10:40:20.537348  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:20.547703  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:20.547778  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:20.572977  359214 cri.go:89] found id: ""
	I1213 10:40:20.572991  359214 logs.go:282] 0 containers: []
	W1213 10:40:20.572998  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:20.573004  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:20.573062  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:20.602314  359214 cri.go:89] found id: ""
	I1213 10:40:20.602328  359214 logs.go:282] 0 containers: []
	W1213 10:40:20.602335  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:20.602341  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:20.602397  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:20.627655  359214 cri.go:89] found id: ""
	I1213 10:40:20.627669  359214 logs.go:282] 0 containers: []
	W1213 10:40:20.627686  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:20.627698  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:20.627767  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:20.655199  359214 cri.go:89] found id: ""
	I1213 10:40:20.655213  359214 logs.go:282] 0 containers: []
	W1213 10:40:20.655220  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:20.655226  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:20.655291  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:20.682083  359214 cri.go:89] found id: ""
	I1213 10:40:20.682107  359214 logs.go:282] 0 containers: []
	W1213 10:40:20.682115  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:20.682120  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:20.682189  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:20.707128  359214 cri.go:89] found id: ""
	I1213 10:40:20.707142  359214 logs.go:282] 0 containers: []
	W1213 10:40:20.707150  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:20.707155  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:20.707213  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:20.732071  359214 cri.go:89] found id: ""
	I1213 10:40:20.732087  359214 logs.go:282] 0 containers: []
	W1213 10:40:20.732094  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:20.732103  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:20.732112  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:20.797387  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:20.788274   14658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:20.789028   14658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:20.791053   14658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:20.791612   14658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:20.793250   14658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:40:20.797397  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:20.797410  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:20.859451  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:20.859471  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:20.892801  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:20.892820  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:20.958351  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:20.958371  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
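For reference, the log sources gathered on every iteration, taken verbatim from the Run: lines above (flag descriptions per util-linux dmesg and systemd journalctl):

    sudo journalctl -u kubelet -n 400      # last 400 kubelet journal entries
    sudo journalctl -u containerd -n 400   # last 400 containerd journal entries
    # kernel ring buffer: warnings and worse, human-readable, no color, no pager
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig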
	I1213 10:40:23.480839  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:23.491926  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:23.491987  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:23.518294  359214 cri.go:89] found id: ""
	I1213 10:40:23.518309  359214 logs.go:282] 0 containers: []
	W1213 10:40:23.518317  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:23.518324  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:23.518385  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:23.545487  359214 cri.go:89] found id: ""
	I1213 10:40:23.545502  359214 logs.go:282] 0 containers: []
	W1213 10:40:23.545509  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:23.545514  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:23.545584  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:23.571990  359214 cri.go:89] found id: ""
	I1213 10:40:23.572004  359214 logs.go:282] 0 containers: []
	W1213 10:40:23.572012  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:23.572017  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:23.572080  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:23.599133  359214 cri.go:89] found id: ""
	I1213 10:40:23.599149  359214 logs.go:282] 0 containers: []
	W1213 10:40:23.599157  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:23.599163  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:23.599223  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:23.626203  359214 cri.go:89] found id: ""
	I1213 10:40:23.626217  359214 logs.go:282] 0 containers: []
	W1213 10:40:23.626225  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:23.626232  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:23.626296  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:23.653325  359214 cri.go:89] found id: ""
	I1213 10:40:23.653341  359214 logs.go:282] 0 containers: []
	W1213 10:40:23.653349  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:23.653354  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:23.653423  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:23.688100  359214 cri.go:89] found id: ""
	I1213 10:40:23.688115  359214 logs.go:282] 0 containers: []
	W1213 10:40:23.688123  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:23.688132  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:23.688141  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:23.750798  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:23.750818  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:23.781668  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:23.781685  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:23.839211  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:23.839231  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:23.856390  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:23.856414  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:23.924021  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:23.914017   14781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:23.914911   14781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:23.915850   14781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:23.917610   14781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:23.918368   14781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:40:26.424278  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:26.434304  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:26.434366  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:26.460634  359214 cri.go:89] found id: ""
	I1213 10:40:26.460649  359214 logs.go:282] 0 containers: []
	W1213 10:40:26.460657  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:26.460663  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:26.460723  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:26.485153  359214 cri.go:89] found id: ""
	I1213 10:40:26.485167  359214 logs.go:282] 0 containers: []
	W1213 10:40:26.485175  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:26.485180  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:26.485238  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:26.514602  359214 cri.go:89] found id: ""
	I1213 10:40:26.514617  359214 logs.go:282] 0 containers: []
	W1213 10:40:26.514624  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:26.514630  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:26.514715  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:26.539399  359214 cri.go:89] found id: ""
	I1213 10:40:26.539415  359214 logs.go:282] 0 containers: []
	W1213 10:40:26.539422  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:26.539427  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:26.539489  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:26.564066  359214 cri.go:89] found id: ""
	I1213 10:40:26.564081  359214 logs.go:282] 0 containers: []
	W1213 10:40:26.564088  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:26.564094  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:26.564158  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:26.595722  359214 cri.go:89] found id: ""
	I1213 10:40:26.595736  359214 logs.go:282] 0 containers: []
	W1213 10:40:26.595744  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:26.595749  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:26.595808  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:26.621852  359214 cri.go:89] found id: ""
	I1213 10:40:26.621867  359214 logs.go:282] 0 containers: []
	W1213 10:40:26.621875  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:26.621884  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:26.621894  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:26.678226  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:26.678245  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:26.694679  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:26.694762  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:26.760593  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:26.751702   14872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:26.752418   14872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:26.754240   14872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:26.754904   14872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:26.756624   14872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:40:26.760604  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:26.760615  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:26.826139  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:26.826161  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:29.354247  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:29.364778  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:29.364838  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:29.391976  359214 cri.go:89] found id: ""
	I1213 10:40:29.391992  359214 logs.go:282] 0 containers: []
	W1213 10:40:29.391999  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:29.392006  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:29.392065  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:29.420898  359214 cri.go:89] found id: ""
	I1213 10:40:29.420913  359214 logs.go:282] 0 containers: []
	W1213 10:40:29.420920  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:29.420926  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:29.420995  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:29.445579  359214 cri.go:89] found id: ""
	I1213 10:40:29.445593  359214 logs.go:282] 0 containers: []
	W1213 10:40:29.445601  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:29.445606  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:29.445669  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:29.470481  359214 cri.go:89] found id: ""
	I1213 10:40:29.470496  359214 logs.go:282] 0 containers: []
	W1213 10:40:29.470504  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:29.470510  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:29.470571  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:29.494582  359214 cri.go:89] found id: ""
	I1213 10:40:29.494597  359214 logs.go:282] 0 containers: []
	W1213 10:40:29.494605  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:29.494612  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:29.494672  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:29.520784  359214 cri.go:89] found id: ""
	I1213 10:40:29.520801  359214 logs.go:282] 0 containers: []
	W1213 10:40:29.520810  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:29.520816  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:29.520879  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:29.546369  359214 cri.go:89] found id: ""
	I1213 10:40:29.546383  359214 logs.go:282] 0 containers: []
	W1213 10:40:29.546390  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:29.546398  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:29.546410  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:29.607363  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:29.607383  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:29.641550  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:29.641568  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:29.700639  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:29.700662  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:29.717135  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:29.717152  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:29.786035  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:29.777828   14992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:29.778659   14992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:29.780297   14992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:29.780629   14992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:29.782173   14992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:29.777828   14992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:29.778659   14992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:29.780297   14992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:29.780629   14992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:29.782173   14992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
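The block above is one iteration of minikube's apiserver wait loop: every ~3 seconds the harness runs pgrep for a kube-apiserver process, then queries the CRI for each expected control-plane container, and every query comes back empty (the found id: "" lines). A minimal way to reproduce the same probe by hand, assuming SSH access to the node under test (e.g. via minikube ssh) and that crictl is on the node's PATH as this log shows; the quoting around the pgrep pattern is added here for shell safety:

    # Probe sequence as run by the harness; both commands appear verbatim in the log above.
    # Empty output from each corresponds to the `found id: ""` lines.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      sudo crictl ps -a --quiet --name="$c"
    done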
	I1213 10:40:32.286874  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:32.297433  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:32.297493  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:32.326086  359214 cri.go:89] found id: ""
	I1213 10:40:32.326102  359214 logs.go:282] 0 containers: []
	W1213 10:40:32.326109  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:32.326116  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:32.326172  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:32.359076  359214 cri.go:89] found id: ""
	I1213 10:40:32.359091  359214 logs.go:282] 0 containers: []
	W1213 10:40:32.359098  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:32.359104  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:32.359170  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:32.384522  359214 cri.go:89] found id: ""
	I1213 10:40:32.384536  359214 logs.go:282] 0 containers: []
	W1213 10:40:32.384544  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:32.384560  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:32.384659  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:32.410250  359214 cri.go:89] found id: ""
	I1213 10:40:32.410264  359214 logs.go:282] 0 containers: []
	W1213 10:40:32.410272  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:32.410285  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:32.410348  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:32.435630  359214 cri.go:89] found id: ""
	I1213 10:40:32.435644  359214 logs.go:282] 0 containers: []
	W1213 10:40:32.435651  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:32.435656  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:32.435714  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:32.463149  359214 cri.go:89] found id: ""
	I1213 10:40:32.463163  359214 logs.go:282] 0 containers: []
	W1213 10:40:32.463171  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:32.463176  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:32.463242  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:32.487678  359214 cri.go:89] found id: ""
	I1213 10:40:32.487692  359214 logs.go:282] 0 containers: []
	W1213 10:40:32.487700  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:32.487707  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:32.487716  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:32.550022  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:32.550044  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:32.583548  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:32.583564  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:32.640719  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:32.640741  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:32.658578  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:32.658596  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:32.723797  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:32.714586   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:32.715311   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:32.716834   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:32.717289   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:32.719662   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:32.714586   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:32.715311   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:32.716834   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:32.717289   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:32.719662   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:35.224914  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:35.236872  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:35.237012  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:35.268051  359214 cri.go:89] found id: ""
	I1213 10:40:35.268066  359214 logs.go:282] 0 containers: []
	W1213 10:40:35.268073  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:35.268080  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:35.268145  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:35.295044  359214 cri.go:89] found id: ""
	I1213 10:40:35.295059  359214 logs.go:282] 0 containers: []
	W1213 10:40:35.295068  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:35.295075  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:35.295135  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:35.325621  359214 cri.go:89] found id: ""
	I1213 10:40:35.325634  359214 logs.go:282] 0 containers: []
	W1213 10:40:35.325642  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:35.325647  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:35.325710  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:35.351145  359214 cri.go:89] found id: ""
	I1213 10:40:35.351160  359214 logs.go:282] 0 containers: []
	W1213 10:40:35.351168  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:35.351173  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:35.351232  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:35.376062  359214 cri.go:89] found id: ""
	I1213 10:40:35.376076  359214 logs.go:282] 0 containers: []
	W1213 10:40:35.376083  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:35.376089  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:35.376145  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:35.400598  359214 cri.go:89] found id: ""
	I1213 10:40:35.400612  359214 logs.go:282] 0 containers: []
	W1213 10:40:35.400619  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:35.400631  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:35.400688  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:35.425347  359214 cri.go:89] found id: ""
	I1213 10:40:35.425361  359214 logs.go:282] 0 containers: []
	W1213 10:40:35.425368  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:35.425376  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:35.425387  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:35.487139  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:35.487160  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:35.514527  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:35.514544  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:35.571469  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:35.571489  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:35.590017  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:35.590034  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:35.658284  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:35.648682   15200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:35.650020   15200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:35.650936   15200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:35.652639   15200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:35.653357   15200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:35.648682   15200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:35.650020   15200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:35.650936   15200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:35.652639   15200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:35.653357   15200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:38.158809  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:38.173580  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:38.173664  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:38.205099  359214 cri.go:89] found id: ""
	I1213 10:40:38.205115  359214 logs.go:282] 0 containers: []
	W1213 10:40:38.205122  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:38.205128  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:38.205185  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:38.230418  359214 cri.go:89] found id: ""
	I1213 10:40:38.230432  359214 logs.go:282] 0 containers: []
	W1213 10:40:38.230439  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:38.230445  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:38.230503  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:38.255657  359214 cri.go:89] found id: ""
	I1213 10:40:38.255671  359214 logs.go:282] 0 containers: []
	W1213 10:40:38.255679  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:38.255684  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:38.255743  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:38.284257  359214 cri.go:89] found id: ""
	I1213 10:40:38.284271  359214 logs.go:282] 0 containers: []
	W1213 10:40:38.284279  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:38.284285  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:38.284343  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:38.310187  359214 cri.go:89] found id: ""
	I1213 10:40:38.310202  359214 logs.go:282] 0 containers: []
	W1213 10:40:38.310209  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:38.310214  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:38.310272  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:38.334855  359214 cri.go:89] found id: ""
	I1213 10:40:38.334870  359214 logs.go:282] 0 containers: []
	W1213 10:40:38.334878  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:38.334883  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:38.334943  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:38.364073  359214 cri.go:89] found id: ""
	I1213 10:40:38.364087  359214 logs.go:282] 0 containers: []
	W1213 10:40:38.364095  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:38.364103  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:38.364114  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:38.380615  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:38.380633  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:38.445151  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:38.436629   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:38.437359   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:38.439007   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:38.439526   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:38.441205   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:38.436629   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:38.437359   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:38.439007   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:38.439526   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:38.441205   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:38.445161  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:38.445171  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:38.508000  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:38.508024  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:38.536010  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:38.536028  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
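Each cycle ends the same way: "describe nodes" fails because the on-node kubectl dials https://localhost:8441 and nothing is listening, hence the repeated dial tcp [::1]:8441: connect: connection refused. Two quick manual checks on the node, assuming ss and curl are present in the node image (an assumption; neither appears in this log), followed by the exact describe command the harness runs:

    # Is anything bound to the apiserver port? (ss/curl availability is assumed.)
    sudo ss -tlnp | grep 8441 || echo "nothing listening on 8441"
    curl -sk https://localhost:8441/healthz || echo "apiserver not reachable"
    # Verbatim from the log -- the command whose stderr is captured above:
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig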
	I1213 10:40:41.097145  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:41.107492  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:41.107560  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:41.133151  359214 cri.go:89] found id: ""
	I1213 10:40:41.133165  359214 logs.go:282] 0 containers: []
	W1213 10:40:41.133173  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:41.133178  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:41.133239  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:41.158807  359214 cri.go:89] found id: ""
	I1213 10:40:41.158822  359214 logs.go:282] 0 containers: []
	W1213 10:40:41.158830  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:41.158835  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:41.158900  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:41.186344  359214 cri.go:89] found id: ""
	I1213 10:40:41.186358  359214 logs.go:282] 0 containers: []
	W1213 10:40:41.186366  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:41.186371  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:41.186432  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:41.212889  359214 cri.go:89] found id: ""
	I1213 10:40:41.212904  359214 logs.go:282] 0 containers: []
	W1213 10:40:41.212911  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:41.212917  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:41.212976  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:41.238414  359214 cri.go:89] found id: ""
	I1213 10:40:41.238429  359214 logs.go:282] 0 containers: []
	W1213 10:40:41.238437  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:41.238442  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:41.238509  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:41.265200  359214 cri.go:89] found id: ""
	I1213 10:40:41.265215  359214 logs.go:282] 0 containers: []
	W1213 10:40:41.265222  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:41.265228  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:41.265299  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:41.293447  359214 cri.go:89] found id: ""
	I1213 10:40:41.293465  359214 logs.go:282] 0 containers: []
	W1213 10:40:41.293473  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:41.293483  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:41.293539  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:41.357277  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:41.348095   15390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:41.348933   15390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:41.350453   15390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:41.350904   15390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:41.352722   15390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:41.348095   15390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:41.348933   15390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:41.350453   15390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:41.350904   15390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:41.352722   15390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:41.357289  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:41.357299  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:41.419746  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:41.419767  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:41.447382  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:41.447400  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:41.502410  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:41.502430  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:44.019462  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:44.030131  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:44.030195  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:44.063076  359214 cri.go:89] found id: ""
	I1213 10:40:44.063093  359214 logs.go:282] 0 containers: []
	W1213 10:40:44.063102  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:44.063107  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:44.063171  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:44.087990  359214 cri.go:89] found id: ""
	I1213 10:40:44.088005  359214 logs.go:282] 0 containers: []
	W1213 10:40:44.088012  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:44.088017  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:44.088077  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:44.116967  359214 cri.go:89] found id: ""
	I1213 10:40:44.116982  359214 logs.go:282] 0 containers: []
	W1213 10:40:44.117000  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:44.117006  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:44.117075  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:44.144381  359214 cri.go:89] found id: ""
	I1213 10:40:44.144395  359214 logs.go:282] 0 containers: []
	W1213 10:40:44.144403  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:44.144414  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:44.144475  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:44.176265  359214 cri.go:89] found id: ""
	I1213 10:40:44.176279  359214 logs.go:282] 0 containers: []
	W1213 10:40:44.176286  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:44.176291  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:44.176349  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:44.204075  359214 cri.go:89] found id: ""
	I1213 10:40:44.204090  359214 logs.go:282] 0 containers: []
	W1213 10:40:44.204097  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:44.204102  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:44.204159  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:44.235147  359214 cri.go:89] found id: ""
	I1213 10:40:44.235161  359214 logs.go:282] 0 containers: []
	W1213 10:40:44.235169  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:44.235177  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:44.235187  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:44.290923  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:44.290942  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:44.307381  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:44.307398  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:44.371069  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:44.362628   15505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:44.363314   15505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:44.365045   15505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:44.365643   15505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:44.367260   15505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:44.362628   15505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:44.363314   15505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:44.365045   15505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:44.365643   15505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:44.367260   15505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:44.371080  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:44.371092  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:44.432736  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:44.432757  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
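When no control-plane containers are found, the harness falls back to gathering node-level logs; the order rotates between iterations, but the sources are always the same five (containerd, container status, kubelet, dmesg, describe nodes). To pull the first four manually on the node, the commands below are verbatim from this log; note the container-status line itself falls back from crictl to docker:

    sudo journalctl -u containerd -n 400
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a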
	I1213 10:40:46.966048  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:46.976554  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:46.976616  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:47.009823  359214 cri.go:89] found id: ""
	I1213 10:40:47.009837  359214 logs.go:282] 0 containers: []
	W1213 10:40:47.009845  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:47.009850  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:47.009912  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:47.035213  359214 cri.go:89] found id: ""
	I1213 10:40:47.035227  359214 logs.go:282] 0 containers: []
	W1213 10:40:47.035234  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:47.035239  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:47.035300  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:47.060442  359214 cri.go:89] found id: ""
	I1213 10:40:47.060457  359214 logs.go:282] 0 containers: []
	W1213 10:40:47.060465  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:47.060470  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:47.060527  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:47.084361  359214 cri.go:89] found id: ""
	I1213 10:40:47.084375  359214 logs.go:282] 0 containers: []
	W1213 10:40:47.084383  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:47.084389  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:47.084453  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:47.109828  359214 cri.go:89] found id: ""
	I1213 10:40:47.109843  359214 logs.go:282] 0 containers: []
	W1213 10:40:47.109850  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:47.109856  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:47.109920  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:47.138538  359214 cri.go:89] found id: ""
	I1213 10:40:47.138553  359214 logs.go:282] 0 containers: []
	W1213 10:40:47.138561  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:47.138566  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:47.138623  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:47.173086  359214 cri.go:89] found id: ""
	I1213 10:40:47.173101  359214 logs.go:282] 0 containers: []
	W1213 10:40:47.173108  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:47.173116  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:47.173125  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:47.230267  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:47.230285  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:47.247567  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:47.247584  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:47.313118  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:47.305055   15610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:47.305868   15610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:47.307513   15610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:47.307952   15610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:47.309445   15610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:47.305055   15610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:47.305868   15610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:47.307513   15610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:47.307952   15610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:47.309445   15610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:47.313128  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:47.313140  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:47.379486  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:47.379507  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:49.911610  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:49.921678  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:49.921738  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:49.945802  359214 cri.go:89] found id: ""
	I1213 10:40:49.945815  359214 logs.go:282] 0 containers: []
	W1213 10:40:49.945823  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:49.945828  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:49.945884  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:49.972021  359214 cri.go:89] found id: ""
	I1213 10:40:49.972036  359214 logs.go:282] 0 containers: []
	W1213 10:40:49.972043  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:49.972048  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:49.972104  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:49.995832  359214 cri.go:89] found id: ""
	I1213 10:40:49.995847  359214 logs.go:282] 0 containers: []
	W1213 10:40:49.995854  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:49.995859  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:49.995917  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:50.025400  359214 cri.go:89] found id: ""
	I1213 10:40:50.025416  359214 logs.go:282] 0 containers: []
	W1213 10:40:50.025424  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:50.025430  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:50.025488  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:50.052197  359214 cri.go:89] found id: ""
	I1213 10:40:50.052213  359214 logs.go:282] 0 containers: []
	W1213 10:40:50.052222  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:50.052229  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:50.052290  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:50.079760  359214 cri.go:89] found id: ""
	I1213 10:40:50.079774  359214 logs.go:282] 0 containers: []
	W1213 10:40:50.079782  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:50.079788  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:50.079849  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:50.109349  359214 cri.go:89] found id: ""
	I1213 10:40:50.109364  359214 logs.go:282] 0 containers: []
	W1213 10:40:50.109372  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:50.109380  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:50.109390  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:50.165908  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:50.165929  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:50.184199  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:50.184216  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:50.252767  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:50.244722   15716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:50.245526   15716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:50.247105   15716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:50.247464   15716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:50.248997   15716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:50.244722   15716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:50.245526   15716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:50.247105   15716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:50.247464   15716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:50.248997   15716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:50.252777  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:50.252790  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:50.314222  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:50.314241  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:52.842532  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:52.853108  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:52.853184  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:52.880391  359214 cri.go:89] found id: ""
	I1213 10:40:52.880412  359214 logs.go:282] 0 containers: []
	W1213 10:40:52.880420  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:52.880426  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:52.880487  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:52.905175  359214 cri.go:89] found id: ""
	I1213 10:40:52.905189  359214 logs.go:282] 0 containers: []
	W1213 10:40:52.905197  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:52.905202  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:52.905279  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:52.934872  359214 cri.go:89] found id: ""
	I1213 10:40:52.934887  359214 logs.go:282] 0 containers: []
	W1213 10:40:52.934894  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:52.934900  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:52.934956  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:52.960307  359214 cri.go:89] found id: ""
	I1213 10:40:52.960321  359214 logs.go:282] 0 containers: []
	W1213 10:40:52.960329  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:52.960334  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:52.960390  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:52.985363  359214 cri.go:89] found id: ""
	I1213 10:40:52.985377  359214 logs.go:282] 0 containers: []
	W1213 10:40:52.985385  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:52.985390  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:52.985453  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:53.011565  359214 cri.go:89] found id: ""
	I1213 10:40:53.011581  359214 logs.go:282] 0 containers: []
	W1213 10:40:53.011589  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:53.011594  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:53.011657  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:53.036397  359214 cri.go:89] found id: ""
	I1213 10:40:53.036412  359214 logs.go:282] 0 containers: []
	W1213 10:40:53.036420  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:53.036428  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:53.036438  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:53.091583  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:53.091603  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:53.107990  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:53.108007  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:53.173876  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:53.164848   15816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:53.165601   15816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:53.167336   15816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:53.167976   15816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:53.169634   15816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:53.164848   15816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:53.165601   15816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:53.167336   15816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:53.167976   15816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:53.169634   15816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:53.173886  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:53.173897  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:53.238989  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:53.239009  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:55.773075  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:55.783512  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:55.783574  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:55.807988  359214 cri.go:89] found id: ""
	I1213 10:40:55.808002  359214 logs.go:282] 0 containers: []
	W1213 10:40:55.808009  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:55.808014  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:55.808073  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:55.831609  359214 cri.go:89] found id: ""
	I1213 10:40:55.831624  359214 logs.go:282] 0 containers: []
	W1213 10:40:55.831632  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:55.831637  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:55.831696  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:55.856162  359214 cri.go:89] found id: ""
	I1213 10:40:55.856177  359214 logs.go:282] 0 containers: []
	W1213 10:40:55.856184  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:55.856190  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:55.856247  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:55.883604  359214 cri.go:89] found id: ""
	I1213 10:40:55.883619  359214 logs.go:282] 0 containers: []
	W1213 10:40:55.883626  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:55.883631  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:55.883695  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:55.907679  359214 cri.go:89] found id: ""
	I1213 10:40:55.907694  359214 logs.go:282] 0 containers: []
	W1213 10:40:55.907701  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:55.907706  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:55.907764  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:55.932970  359214 cri.go:89] found id: ""
	I1213 10:40:55.932984  359214 logs.go:282] 0 containers: []
	W1213 10:40:55.932991  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:55.932996  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:55.933057  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:55.956837  359214 cri.go:89] found id: ""
	I1213 10:40:55.956851  359214 logs.go:282] 0 containers: []
	W1213 10:40:55.956858  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:55.956866  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:55.956877  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:56.030354  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:56.021163   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:56.021989   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:56.023979   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:56.024615   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:56.026271   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:56.021163   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:56.021989   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:56.023979   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:56.024615   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:56.026271   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:56.030364  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:56.030376  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:56.092205  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:56.092226  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:56.119616  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:56.119633  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:56.177084  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:56.177103  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:58.695794  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:58.706025  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:58.706086  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:58.729634  359214 cri.go:89] found id: ""
	I1213 10:40:58.729647  359214 logs.go:282] 0 containers: []
	W1213 10:40:58.729654  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:58.729659  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:58.729718  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:58.753786  359214 cri.go:89] found id: ""
	I1213 10:40:58.753800  359214 logs.go:282] 0 containers: []
	W1213 10:40:58.753808  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:58.753813  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:58.753874  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:58.778478  359214 cri.go:89] found id: ""
	I1213 10:40:58.778491  359214 logs.go:282] 0 containers: []
	W1213 10:40:58.778498  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:58.778503  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:58.778560  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:58.803243  359214 cri.go:89] found id: ""
	I1213 10:40:58.803258  359214 logs.go:282] 0 containers: []
	W1213 10:40:58.803274  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:58.803280  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:58.803342  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:58.827435  359214 cri.go:89] found id: ""
	I1213 10:40:58.827449  359214 logs.go:282] 0 containers: []
	W1213 10:40:58.827457  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:58.827462  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:58.827526  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:58.852612  359214 cri.go:89] found id: ""
	I1213 10:40:58.852627  359214 logs.go:282] 0 containers: []
	W1213 10:40:58.852635  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:58.852640  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:58.852702  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:58.879181  359214 cri.go:89] found id: ""
	I1213 10:40:58.879195  359214 logs.go:282] 0 containers: []
	W1213 10:40:58.879202  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:58.879210  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:58.879224  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:58.940146  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:58.940166  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:58.969086  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:58.969104  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:59.027812  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:59.027832  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:59.044161  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:59.044180  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:59.107958  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:59.099940   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:59.100731   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:59.102281   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:59.102588   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:59.104070   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:59.099940   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:59.100731   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:59.102281   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:59.102588   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:59.104070   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:41:01.608222  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:01.619072  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:01.619137  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:01.644559  359214 cri.go:89] found id: ""
	I1213 10:41:01.644574  359214 logs.go:282] 0 containers: []
	W1213 10:41:01.644582  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:01.644587  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:01.644690  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:01.673686  359214 cri.go:89] found id: ""
	I1213 10:41:01.673701  359214 logs.go:282] 0 containers: []
	W1213 10:41:01.673709  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:01.673714  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:01.673776  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:01.700231  359214 cri.go:89] found id: ""
	I1213 10:41:01.700246  359214 logs.go:282] 0 containers: []
	W1213 10:41:01.700253  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:01.700259  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:01.700317  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:01.729867  359214 cri.go:89] found id: ""
	I1213 10:41:01.729883  359214 logs.go:282] 0 containers: []
	W1213 10:41:01.729890  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:01.729895  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:01.729954  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:01.754275  359214 cri.go:89] found id: ""
	I1213 10:41:01.754289  359214 logs.go:282] 0 containers: []
	W1213 10:41:01.754297  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:01.754302  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:01.754362  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:01.780449  359214 cri.go:89] found id: ""
	I1213 10:41:01.780464  359214 logs.go:282] 0 containers: []
	W1213 10:41:01.780472  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:01.780477  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:01.780533  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:01.806614  359214 cri.go:89] found id: ""
	I1213 10:41:01.806638  359214 logs.go:282] 0 containers: []
	W1213 10:41:01.806646  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:01.806654  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:01.806666  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:01.872660  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:01.872681  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:41:01.908081  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:01.908099  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:01.965082  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:01.965103  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:01.982015  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:01.982033  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:02.054794  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:02.045518   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:02.046349   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:02.047002   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:02.048599   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:02.049133   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:41:02.045518   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:02.046349   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:02.047002   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:02.048599   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:02.049133   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
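	[annotation] The recurring "describe nodes" failure is a symptom, not the cause: kubectl on the node dials https://localhost:8441 and gets connection refused because no kube-apiserver container ever came up, so nothing is listening on that port. A quick probe that distinguishes "port closed" from "apiserver unhealthy", assuming curl is present on the node (a hypothetical one-liner, not part of the test harness):

	    # Hypothetical probe; assumes curl exists on the node.
	    curl -sk --max-time 5 https://localhost:8441/healthz \
	      && echo "apiserver is serving" \
	      || echo "connection refused or timed out: apiserver is not listening on 8441"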
	I1213 10:41:04.555147  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:04.565791  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:04.565856  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:04.591956  359214 cri.go:89] found id: ""
	I1213 10:41:04.591971  359214 logs.go:282] 0 containers: []
	W1213 10:41:04.591978  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:04.591984  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:04.592045  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:04.615698  359214 cri.go:89] found id: ""
	I1213 10:41:04.615713  359214 logs.go:282] 0 containers: []
	W1213 10:41:04.615720  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:04.615725  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:04.615786  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:04.640509  359214 cri.go:89] found id: ""
	I1213 10:41:04.640523  359214 logs.go:282] 0 containers: []
	W1213 10:41:04.640531  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:04.640538  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:04.640596  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:04.665547  359214 cri.go:89] found id: ""
	I1213 10:41:04.665562  359214 logs.go:282] 0 containers: []
	W1213 10:41:04.665569  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:04.665577  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:04.665637  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:04.690947  359214 cri.go:89] found id: ""
	I1213 10:41:04.690961  359214 logs.go:282] 0 containers: []
	W1213 10:41:04.690969  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:04.690974  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:04.691037  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:04.720397  359214 cri.go:89] found id: ""
	I1213 10:41:04.720421  359214 logs.go:282] 0 containers: []
	W1213 10:41:04.720429  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:04.720435  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:04.720492  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:04.750207  359214 cri.go:89] found id: ""
	I1213 10:41:04.750233  359214 logs.go:282] 0 containers: []
	W1213 10:41:04.750241  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:04.750250  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:04.750261  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:04.814350  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:04.806033   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:04.806630   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:04.808181   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:04.808726   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:04.810316   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:41:04.806033   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:04.806630   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:04.808181   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:04.808726   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:04.810316   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:41:04.814360  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:04.814381  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:04.876775  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:04.876798  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:41:04.904820  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:04.904836  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:04.962939  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:04.962958  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:07.479750  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:07.489681  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:07.489740  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:07.516670  359214 cri.go:89] found id: ""
	I1213 10:41:07.516684  359214 logs.go:282] 0 containers: []
	W1213 10:41:07.516691  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:07.516697  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:07.516754  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:07.541873  359214 cri.go:89] found id: ""
	I1213 10:41:07.541888  359214 logs.go:282] 0 containers: []
	W1213 10:41:07.541895  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:07.541900  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:07.541958  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:07.567390  359214 cri.go:89] found id: ""
	I1213 10:41:07.567404  359214 logs.go:282] 0 containers: []
	W1213 10:41:07.567411  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:07.567416  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:07.567476  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:07.595533  359214 cri.go:89] found id: ""
	I1213 10:41:07.595546  359214 logs.go:282] 0 containers: []
	W1213 10:41:07.595553  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:07.595559  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:07.595624  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:07.619449  359214 cri.go:89] found id: ""
	I1213 10:41:07.619463  359214 logs.go:282] 0 containers: []
	W1213 10:41:07.619470  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:07.619476  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:07.619535  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:07.646270  359214 cri.go:89] found id: ""
	I1213 10:41:07.646284  359214 logs.go:282] 0 containers: []
	W1213 10:41:07.646291  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:07.646297  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:07.646356  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:07.671609  359214 cri.go:89] found id: ""
	I1213 10:41:07.671623  359214 logs.go:282] 0 containers: []
	W1213 10:41:07.671630  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:07.671638  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:07.671648  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:07.726992  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:07.727010  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:07.743360  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:07.743377  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:07.805371  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:07.797570   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:07.797988   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:07.799538   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:07.799877   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:07.801379   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:41:07.797570   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:07.797988   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:07.799538   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:07.799877   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:07.801379   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:41:07.805381  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:07.805393  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:07.867093  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:07.867115  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:41:10.399083  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:10.409097  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:10.409158  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:10.444135  359214 cri.go:89] found id: ""
	I1213 10:41:10.444149  359214 logs.go:282] 0 containers: []
	W1213 10:41:10.444157  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:10.444162  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:10.444224  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:10.476756  359214 cri.go:89] found id: ""
	I1213 10:41:10.476771  359214 logs.go:282] 0 containers: []
	W1213 10:41:10.476778  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:10.476784  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:10.476842  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:10.501876  359214 cri.go:89] found id: ""
	I1213 10:41:10.501890  359214 logs.go:282] 0 containers: []
	W1213 10:41:10.501898  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:10.501903  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:10.501962  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:10.526921  359214 cri.go:89] found id: ""
	I1213 10:41:10.526936  359214 logs.go:282] 0 containers: []
	W1213 10:41:10.526943  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:10.526949  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:10.527008  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:10.560474  359214 cri.go:89] found id: ""
	I1213 10:41:10.560489  359214 logs.go:282] 0 containers: []
	W1213 10:41:10.560496  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:10.560501  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:10.560560  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:10.589176  359214 cri.go:89] found id: ""
	I1213 10:41:10.589190  359214 logs.go:282] 0 containers: []
	W1213 10:41:10.589209  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:10.589215  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:10.589301  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:10.614119  359214 cri.go:89] found id: ""
	I1213 10:41:10.614139  359214 logs.go:282] 0 containers: []
	W1213 10:41:10.614146  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:10.614155  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:10.614165  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:10.669835  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:10.669856  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:10.687547  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:10.687564  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:10.753151  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:10.744373   16446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:10.744993   16446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:10.747393   16446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:10.747860   16446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:10.749416   16446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:41:10.744373   16446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:10.744993   16446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:10.747393   16446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:10.747860   16446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:10.749416   16446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:41:10.753161  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:10.753175  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:10.825142  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:10.825173  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:41:13.352978  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:13.363579  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:13.363649  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:13.392544  359214 cri.go:89] found id: ""
	I1213 10:41:13.392558  359214 logs.go:282] 0 containers: []
	W1213 10:41:13.392565  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:13.392571  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:13.392668  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:13.431393  359214 cri.go:89] found id: ""
	I1213 10:41:13.431407  359214 logs.go:282] 0 containers: []
	W1213 10:41:13.431424  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:13.431430  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:13.431498  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:13.467012  359214 cri.go:89] found id: ""
	I1213 10:41:13.467027  359214 logs.go:282] 0 containers: []
	W1213 10:41:13.467034  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:13.467040  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:13.467114  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:13.495958  359214 cri.go:89] found id: ""
	I1213 10:41:13.495972  359214 logs.go:282] 0 containers: []
	W1213 10:41:13.495990  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:13.495996  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:13.496061  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:13.521376  359214 cri.go:89] found id: ""
	I1213 10:41:13.521399  359214 logs.go:282] 0 containers: []
	W1213 10:41:13.521408  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:13.521413  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:13.521480  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:13.548831  359214 cri.go:89] found id: ""
	I1213 10:41:13.548845  359214 logs.go:282] 0 containers: []
	W1213 10:41:13.548852  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:13.548858  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:13.548920  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:13.574611  359214 cri.go:89] found id: ""
	I1213 10:41:13.574626  359214 logs.go:282] 0 containers: []
	W1213 10:41:13.574633  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:13.574661  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:13.574673  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:13.631156  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:13.631175  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:13.647668  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:13.647685  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:13.712729  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:13.703922   16550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:13.704556   16550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:13.706153   16550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:13.706621   16550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:13.708279   16550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:41:13.703922   16550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:13.704556   16550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:13.706153   16550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:13.706621   16550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:13.708279   16550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:41:13.712740  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:13.712752  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:13.776779  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:13.776799  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
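	[annotation] The timestamps show the whole cycle repeating on a roughly three-second cadence (10:41:10, 10:41:13, 10:41:16) until the wait budget is exhausted. A hedged sketch of an equivalent wait loop, reusing the pgrep pattern recorded in the log; the six-minute deadline is an assumed figure for illustration, since the log does not show minikube's actual wait configuration:

	    # Assumed six-minute budget for illustration; the real deadline comes
	    # from minikube's own wait configuration, which this log does not show.
	    deadline=$((SECONDS + 360))
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      if (( SECONDS >= deadline )); then
	        echo "timed out waiting for kube-apiserver" >&2
	        exit 1
	      fi
	      sleep 3
	    done
	    echo "kube-apiserver process found"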
	I1213 10:41:16.310332  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:16.320699  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:16.320761  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:16.344441  359214 cri.go:89] found id: ""
	I1213 10:41:16.344455  359214 logs.go:282] 0 containers: []
	W1213 10:41:16.344462  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:16.344468  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:16.344529  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:16.372703  359214 cri.go:89] found id: ""
	I1213 10:41:16.372717  359214 logs.go:282] 0 containers: []
	W1213 10:41:16.372725  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:16.372730  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:16.372789  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:16.397701  359214 cri.go:89] found id: ""
	I1213 10:41:16.397715  359214 logs.go:282] 0 containers: []
	W1213 10:41:16.397723  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:16.397728  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:16.397785  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:16.436711  359214 cri.go:89] found id: ""
	I1213 10:41:16.436726  359214 logs.go:282] 0 containers: []
	W1213 10:41:16.436733  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:16.436739  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:16.436795  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:16.471220  359214 cri.go:89] found id: ""
	I1213 10:41:16.471235  359214 logs.go:282] 0 containers: []
	W1213 10:41:16.471243  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:16.471248  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:16.471306  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:16.498773  359214 cri.go:89] found id: ""
	I1213 10:41:16.498788  359214 logs.go:282] 0 containers: []
	W1213 10:41:16.498796  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:16.498801  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:16.498861  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:16.523734  359214 cri.go:89] found id: ""
	I1213 10:41:16.523749  359214 logs.go:282] 0 containers: []
	W1213 10:41:16.523756  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:16.523764  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:16.523775  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:41:16.554346  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:16.554364  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:16.610645  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:16.610665  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:16.626953  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:16.626970  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:16.691344  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:16.682639   16666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:16.683311   16666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:16.685086   16666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:16.685793   16666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:16.687420   16666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:41:16.682639   16666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:16.683311   16666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:16.685086   16666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:16.685793   16666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:16.687420   16666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:41:16.691354  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:16.691367  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
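The cycle above (pgrep for a live apiserver process, a per-component crictl listing, then log gathering) repeats every few seconds for the rest of the wait, always finding zero containers. Condensed into a shell sketch using only commands quoted from the log itself:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'   # is an apiserver process alive?
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
        ids=$(sudo crictl ps -a --quiet --name="$c")
        [ -z "$ids" ] && echo "no container found matching \"$c\""
    done
    sudo journalctl -u kubelet -n 400       # kubelet logs
    sudo journalctl -u containerd -n 400    # containerd logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400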
	I1213 10:41:19.255129  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:19.265879  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:19.265940  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:19.291837  359214 cri.go:89] found id: ""
	I1213 10:41:19.291851  359214 logs.go:282] 0 containers: []
	W1213 10:41:19.291859  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:19.291864  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:19.291923  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:19.315964  359214 cri.go:89] found id: ""
	I1213 10:41:19.315978  359214 logs.go:282] 0 containers: []
	W1213 10:41:19.315985  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:19.315990  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:19.316046  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:19.343352  359214 cri.go:89] found id: ""
	I1213 10:41:19.343366  359214 logs.go:282] 0 containers: []
	W1213 10:41:19.343373  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:19.343378  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:19.343434  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:19.367745  359214 cri.go:89] found id: ""
	I1213 10:41:19.367760  359214 logs.go:282] 0 containers: []
	W1213 10:41:19.367767  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:19.367773  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:19.367830  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:19.391416  359214 cri.go:89] found id: ""
	I1213 10:41:19.391429  359214 logs.go:282] 0 containers: []
	W1213 10:41:19.391437  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:19.391442  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:19.391503  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:19.420969  359214 cri.go:89] found id: ""
	I1213 10:41:19.420982  359214 logs.go:282] 0 containers: []
	W1213 10:41:19.420989  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:19.420995  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:19.421051  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:19.459512  359214 cri.go:89] found id: ""
	I1213 10:41:19.459528  359214 logs.go:282] 0 containers: []
	W1213 10:41:19.459536  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:19.459544  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:19.459555  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:41:19.490208  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:19.490224  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:19.546240  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:19.546261  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:19.562645  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:19.562664  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:19.625588  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:19.617541   16770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:19.617927   16770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:19.619446   16770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:19.619795   16770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:19.621463   16770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:41:19.617541   16770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:19.617927   16770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:19.619446   16770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:19.619795   16770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:19.621463   16770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:41:19.625599  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:19.625610  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:22.187966  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:22.198583  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:22.198650  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:22.223213  359214 cri.go:89] found id: ""
	I1213 10:41:22.223227  359214 logs.go:282] 0 containers: []
	W1213 10:41:22.223240  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:22.223246  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:22.223303  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:22.248552  359214 cri.go:89] found id: ""
	I1213 10:41:22.248567  359214 logs.go:282] 0 containers: []
	W1213 10:41:22.248574  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:22.248579  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:22.248641  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:22.273682  359214 cri.go:89] found id: ""
	I1213 10:41:22.273697  359214 logs.go:282] 0 containers: []
	W1213 10:41:22.273714  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:22.273720  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:22.273802  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:22.299868  359214 cri.go:89] found id: ""
	I1213 10:41:22.299883  359214 logs.go:282] 0 containers: []
	W1213 10:41:22.299891  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:22.299896  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:22.299962  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:22.325309  359214 cri.go:89] found id: ""
	I1213 10:41:22.325324  359214 logs.go:282] 0 containers: []
	W1213 10:41:22.325331  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:22.325337  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:22.325399  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:22.354179  359214 cri.go:89] found id: ""
	I1213 10:41:22.354193  359214 logs.go:282] 0 containers: []
	W1213 10:41:22.354200  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:22.354205  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:22.354261  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:22.378958  359214 cri.go:89] found id: ""
	I1213 10:41:22.378980  359214 logs.go:282] 0 containers: []
	W1213 10:41:22.378987  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:22.378997  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:22.379007  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:22.440927  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:22.440949  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:22.460102  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:22.460120  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:22.529575  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:22.521290   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:22.521799   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:22.523477   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:22.524007   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:22.525558   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:41:22.521290   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:22.521799   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:22.523477   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:22.524007   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:22.525558   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:41:22.529585  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:22.529595  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:22.592904  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:22.592925  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:41:25.122090  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:25.132657  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:25.132721  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:25.159021  359214 cri.go:89] found id: ""
	I1213 10:41:25.159036  359214 logs.go:282] 0 containers: []
	W1213 10:41:25.159044  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:25.159049  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:25.159111  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:25.185666  359214 cri.go:89] found id: ""
	I1213 10:41:25.185691  359214 logs.go:282] 0 containers: []
	W1213 10:41:25.185700  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:25.185706  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:25.185787  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:25.211201  359214 cri.go:89] found id: ""
	I1213 10:41:25.211216  359214 logs.go:282] 0 containers: []
	W1213 10:41:25.211223  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:25.211228  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:25.211288  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:25.241164  359214 cri.go:89] found id: ""
	I1213 10:41:25.241178  359214 logs.go:282] 0 containers: []
	W1213 10:41:25.241185  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:25.241191  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:25.241259  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:25.266721  359214 cri.go:89] found id: ""
	I1213 10:41:25.266737  359214 logs.go:282] 0 containers: []
	W1213 10:41:25.266745  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:25.266751  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:25.266815  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:25.292241  359214 cri.go:89] found id: ""
	I1213 10:41:25.292255  359214 logs.go:282] 0 containers: []
	W1213 10:41:25.292263  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:25.292272  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:25.292332  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:25.317411  359214 cri.go:89] found id: ""
	I1213 10:41:25.317441  359214 logs.go:282] 0 containers: []
	W1213 10:41:25.317450  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:25.317458  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:25.317469  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:25.373328  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:25.373348  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:25.390032  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:25.390057  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:25.483290  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:25.471186   16962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:25.471638   16962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:25.473963   16962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:25.474270   16962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:25.475696   16962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:41:25.471186   16962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:25.471638   16962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:25.473963   16962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:25.474270   16962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:25.475696   16962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:41:25.483300  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:25.483311  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:25.544908  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:25.544930  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:41:28.078163  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:28.091034  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:28.091099  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:28.115911  359214 cri.go:89] found id: ""
	I1213 10:41:28.115925  359214 logs.go:282] 0 containers: []
	W1213 10:41:28.115934  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:28.115940  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:28.116004  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:28.139316  359214 cri.go:89] found id: ""
	I1213 10:41:28.139330  359214 logs.go:282] 0 containers: []
	W1213 10:41:28.139338  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:28.139343  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:28.139399  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:28.164405  359214 cri.go:89] found id: ""
	I1213 10:41:28.164420  359214 logs.go:282] 0 containers: []
	W1213 10:41:28.164427  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:28.164434  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:28.164494  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:28.193103  359214 cri.go:89] found id: ""
	I1213 10:41:28.193117  359214 logs.go:282] 0 containers: []
	W1213 10:41:28.193130  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:28.193136  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:28.193191  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:28.218193  359214 cri.go:89] found id: ""
	I1213 10:41:28.218207  359214 logs.go:282] 0 containers: []
	W1213 10:41:28.218214  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:28.218219  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:28.218277  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:28.246727  359214 cri.go:89] found id: ""
	I1213 10:41:28.246741  359214 logs.go:282] 0 containers: []
	W1213 10:41:28.246748  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:28.246754  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:28.246828  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:28.272720  359214 cri.go:89] found id: ""
	I1213 10:41:28.272735  359214 logs.go:282] 0 containers: []
	W1213 10:41:28.272753  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:28.272761  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:28.272771  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:28.329731  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:28.329751  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:28.345935  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:28.345953  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:28.409004  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:28.400511   17066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:28.401329   17066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:28.403117   17066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:28.403657   17066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:28.404653   17066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:41:28.400511   17066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:28.401329   17066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:28.403117   17066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:28.403657   17066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:28.404653   17066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:41:28.409014  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:28.409024  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:28.475582  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:28.475603  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:41:31.008193  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:31.019100  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:31.019165  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:31.043886  359214 cri.go:89] found id: ""
	I1213 10:41:31.043907  359214 logs.go:282] 0 containers: []
	W1213 10:41:31.043915  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:31.043921  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:31.043987  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:31.069993  359214 cri.go:89] found id: ""
	I1213 10:41:31.070008  359214 logs.go:282] 0 containers: []
	W1213 10:41:31.070016  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:31.070022  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:31.070089  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:31.098048  359214 cri.go:89] found id: ""
	I1213 10:41:31.098075  359214 logs.go:282] 0 containers: []
	W1213 10:41:31.098083  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:31.098089  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:31.098161  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:31.123592  359214 cri.go:89] found id: ""
	I1213 10:41:31.123608  359214 logs.go:282] 0 containers: []
	W1213 10:41:31.123616  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:31.123621  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:31.123686  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:31.151147  359214 cri.go:89] found id: ""
	I1213 10:41:31.151163  359214 logs.go:282] 0 containers: []
	W1213 10:41:31.151171  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:31.151177  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:31.151244  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:31.181236  359214 cri.go:89] found id: ""
	I1213 10:41:31.181257  359214 logs.go:282] 0 containers: []
	W1213 10:41:31.181265  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:31.181270  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:31.181332  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:31.210269  359214 cri.go:89] found id: ""
	I1213 10:41:31.210283  359214 logs.go:282] 0 containers: []
	W1213 10:41:31.210303  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:31.210311  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:31.210325  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:31.227244  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:31.227261  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:31.293720  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:31.285094   17168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:31.285961   17168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:31.287612   17168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:31.287962   17168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:31.289354   17168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:41:31.285094   17168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:31.285961   17168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:31.287612   17168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:31.287962   17168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:31.289354   17168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:41:31.293731  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:31.293745  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:31.357626  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:31.357648  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:41:31.386271  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:31.386288  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:33.948226  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:33.958367  359214 kubeadm.go:602] duration metric: took 4m4.333187147s to restartPrimaryControlPlane
	W1213 10:41:33.958431  359214 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1213 10:41:33.958502  359214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 10:41:34.375262  359214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:41:34.388893  359214 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:41:34.396960  359214 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:41:34.397012  359214 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:41:34.404696  359214 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:41:34.404706  359214 kubeadm.go:158] found existing configuration files:
	
	I1213 10:41:34.404755  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:41:34.412350  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:41:34.412405  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:41:34.419971  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:41:34.427828  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:41:34.427887  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:41:34.435644  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:41:34.443354  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:41:34.443408  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:41:34.451024  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:41:34.458860  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:41:34.458918  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
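With no /etc/kubernetes/*.conf files on disk after the reset, the stale-config cleanup above degenerates into a grep miss followed by an rm of an already-missing file, four times over. The logic, condensed (endpoint and paths exactly as in this run):

    endpoint="https://control-plane.minikube.internal:8441"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # a failed grep (missing file or wrong endpoint) marks the config stale
        sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done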
	I1213 10:41:34.466249  359214 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:41:34.504797  359214 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:41:34.504845  359214 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:41:34.587434  359214 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:41:34.587499  359214 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:41:34.587534  359214 kubeadm.go:319] OS: Linux
	I1213 10:41:34.587577  359214 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:41:34.587624  359214 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:41:34.587670  359214 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:41:34.587717  359214 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:41:34.587764  359214 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:41:34.587816  359214 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:41:34.587860  359214 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:41:34.587906  359214 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:41:34.587951  359214 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:41:34.656000  359214 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:41:34.656112  359214 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:41:34.656196  359214 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:41:34.661831  359214 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:41:34.665544  359214 out.go:252]   - Generating certificates and keys ...
	I1213 10:41:34.665620  359214 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:41:34.665681  359214 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:41:34.665752  359214 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:41:34.665808  359214 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:41:34.665873  359214 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:41:34.665922  359214 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:41:34.665981  359214 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:41:34.666037  359214 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:41:34.666107  359214 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:41:34.666174  359214 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:41:34.666208  359214 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:41:34.666259  359214 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:41:35.121283  359214 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:41:35.663053  359214 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:41:35.746928  359214 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:41:35.962879  359214 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:41:36.165716  359214 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:41:36.166361  359214 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:41:36.169355  359214 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:41:36.172503  359214 out.go:252]   - Booting up control plane ...
	I1213 10:41:36.172623  359214 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:41:36.172875  359214 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:41:36.174488  359214 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:41:36.195010  359214 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:41:36.195108  359214 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:41:36.203505  359214 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:41:36.203828  359214 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:41:36.204072  359214 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:41:36.339853  359214 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:41:36.339968  359214 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:45:36.340589  359214 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00099636s
	I1213 10:45:36.340614  359214 kubeadm.go:319] 
	I1213 10:45:36.340667  359214 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:45:36.340697  359214 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:45:36.340795  359214 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:45:36.340800  359214 kubeadm.go:319] 
	I1213 10:45:36.340897  359214 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:45:36.340926  359214 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:45:36.340953  359214 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:45:36.340956  359214 kubeadm.go:319] 
	I1213 10:45:36.344674  359214 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:45:36.345121  359214 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:45:36.345236  359214 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:45:36.345471  359214 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 10:45:36.345476  359214 kubeadm.go:319] 
	I1213 10:45:36.345548  359214 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
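Of the three preflight warnings above, the cgroups one is the most likely culprit on this cgroup-v1 5.15 kernel: the message states that kubelet v1.35+ requires FailCgroupV1 to be set to false (and the validation explicitly skipped) before it will run on cgroup v1. A sketch of the opt-out; the YAML key is assumed to be the camelCase form of the option the warning names, and /var/lib/kubelet/config.yaml is the file kubeadm writes above, assumed not to contain the key already:

    printf 'failCgroupV1: false\n' | sudo tee -a /var/lib/kubelet/config.yaml
    sudo systemctl restart kubelet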
	W1213 10:45:36.345669  359214 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00099636s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
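Note: the wait-control-plane failure above reduces to the kubelet never answering its local health endpoint on port 10248. The probe kubeadm's wait loop issues can be run by hand from inside the node; a minimal sketch, assuming the profile name functional-652709 from this run and that `minikube ssh` still reaches the container:

	# open a shell in the minikube node for this profile
	minikube ssh -p functional-652709

	# inside the node: the exact health probe kubeadm was polling for 4m0s
	curl -sSL http://127.0.0.1:10248/healthz

	# if it refuses or times out, see why the kubelet keeps dying
	systemctl status kubelet
	journalctl -xeu kubelet -n 100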
	I1213 10:45:36.345754  359214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 10:45:36.752142  359214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:45:36.765694  359214 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:45:36.765753  359214 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:45:36.773442  359214 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:45:36.773451  359214 kubeadm.go:158] found existing configuration files:
	
	I1213 10:45:36.773504  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:45:36.781648  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:45:36.781706  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:45:36.789406  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:45:36.797582  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:45:36.797641  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:45:36.805463  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:45:36.813325  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:45:36.813378  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:45:36.820926  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:45:36.828930  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:45:36.828988  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:45:36.836622  359214 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:45:36.877023  359214 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:45:36.877075  359214 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:45:36.946303  359214 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:45:36.946364  359214 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:45:36.946398  359214 kubeadm.go:319] OS: Linux
	I1213 10:45:36.946444  359214 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:45:36.946489  359214 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:45:36.946532  359214 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:45:36.946576  359214 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:45:36.946620  359214 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:45:36.946665  359214 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:45:36.946727  359214 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:45:36.946771  359214 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:45:36.946813  359214 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:45:37.023251  359214 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:45:37.023367  359214 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:45:37.023453  359214 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:45:37.035188  359214 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:45:37.040505  359214 out.go:252]   - Generating certificates and keys ...
	I1213 10:45:37.040588  359214 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:45:37.040657  359214 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:45:37.040732  359214 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:45:37.040792  359214 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:45:37.040860  359214 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:45:37.040912  359214 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:45:37.040974  359214 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:45:37.041034  359214 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:45:37.041112  359214 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:45:37.041183  359214 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:45:37.041219  359214 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:45:37.041274  359214 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:45:37.085508  359214 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:45:37.524146  359214 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:45:37.643175  359214 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:45:38.077377  359214 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:45:38.482147  359214 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:45:38.482682  359214 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:45:38.485202  359214 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:45:38.490562  359214 out.go:252]   - Booting up control plane ...
	I1213 10:45:38.490673  359214 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:45:38.490778  359214 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:45:38.490854  359214 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:45:38.510040  359214 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:45:38.510136  359214 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:45:38.518983  359214 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:45:38.519096  359214 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:45:38.519153  359214 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:45:38.652209  359214 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:45:38.652350  359214 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:49:38.651567  359214 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001187482s
	I1213 10:49:38.651592  359214 kubeadm.go:319] 
	I1213 10:49:38.651654  359214 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:49:38.651686  359214 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:49:38.651792  359214 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:49:38.651797  359214 kubeadm.go:319] 
	I1213 10:49:38.651939  359214 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:49:38.651995  359214 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:49:38.652034  359214 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:49:38.652037  359214 kubeadm.go:319] 
	I1213 10:49:38.656860  359214 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:49:38.657251  359214 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:49:38.657352  359214 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:49:38.657572  359214 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 10:49:38.657576  359214 kubeadm.go:319] 
	I1213 10:49:38.657639  359214 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 10:49:38.657718  359214 kubeadm.go:403] duration metric: took 12m9.068082439s to StartCluster
	I1213 10:49:38.657750  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:49:38.657821  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:49:38.689768  359214 cri.go:89] found id: ""
	I1213 10:49:38.689783  359214 logs.go:282] 0 containers: []
	W1213 10:49:38.689798  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:49:38.689803  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:49:38.689865  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:49:38.719427  359214 cri.go:89] found id: ""
	I1213 10:49:38.719441  359214 logs.go:282] 0 containers: []
	W1213 10:49:38.719449  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:49:38.719455  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:49:38.719513  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:49:38.747452  359214 cri.go:89] found id: ""
	I1213 10:49:38.747466  359214 logs.go:282] 0 containers: []
	W1213 10:49:38.747474  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:49:38.747480  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:49:38.747544  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:49:38.772270  359214 cri.go:89] found id: ""
	I1213 10:49:38.772286  359214 logs.go:282] 0 containers: []
	W1213 10:49:38.772293  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:49:38.772298  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:49:38.772358  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:49:38.796548  359214 cri.go:89] found id: ""
	I1213 10:49:38.796562  359214 logs.go:282] 0 containers: []
	W1213 10:49:38.796570  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:49:38.796575  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:49:38.796633  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:49:38.825383  359214 cri.go:89] found id: ""
	I1213 10:49:38.825397  359214 logs.go:282] 0 containers: []
	W1213 10:49:38.825404  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:49:38.825410  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:49:38.825467  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:49:38.854743  359214 cri.go:89] found id: ""
	I1213 10:49:38.854758  359214 logs.go:282] 0 containers: []
	W1213 10:49:38.854765  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:49:38.854775  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:49:38.854785  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:49:38.911438  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:49:38.911459  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:49:38.928194  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:49:38.928212  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:49:38.993056  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:49:38.985025   20997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:38.985836   20997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:38.987445   20997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:38.987763   20997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:38.989301   20997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:49:38.985025   20997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:38.985836   20997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:38.987445   20997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:38.987763   20997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:38.989301   20997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:49:38.993068  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:49:38.993079  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:49:39.059560  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:49:39.059584  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
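Note: every `found id: ""` above means the control plane never produced a single container, so minikube falls back to raw journals. The sweep can be reproduced inside the node with the same commands the log shows; a sketch under that assumption:

	# each control-plane query should come back empty on this broken node
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  sudo crictl ps -a --quiet --name="$name"
	done

	# fall back to the runtime and kubelet journals, as minikube does
	sudo journalctl -u containerd -n 400
	sudo journalctl -u kubelet -n 400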
	W1213 10:49:39.090490  359214 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001187482s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 10:49:39.090521  359214 out.go:285] * 
	W1213 10:49:39.090586  359214 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001187482s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 10:49:39.090603  359214 out.go:285] * 
	W1213 10:49:39.092733  359214 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:49:39.097735  359214 out.go:203] 
	W1213 10:49:39.101721  359214 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001187482s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 10:49:39.101772  359214 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 10:49:39.101799  359214 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 10:49:39.104924  359214 out.go:203] 
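Note: the suggestion printed above can be applied directly on a retry; a sketch reusing only names that appear in this run. The kubelet journal further down, however, points at cgroup v1 validation rather than the cgroup driver, so this hint alone may not be sufficient here:

	# retry the start with the kubelet cgroup driver forced to systemd,
	# as the K8S_KUBELET_NOT_RUNNING advice suggests
	minikube start -p functional-652709 --extra-config=kubelet.cgroup-driver=systemd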
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861227644Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861318114Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861438764Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861513571Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861578449Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861642483Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861707304Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861776350Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861845545Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861934818Z" level=info msg="Connect containerd service"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.862289545Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.862951451Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.874919104Z" level=info msg="Start subscribing containerd event"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.875103516Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.875569851Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.881349344Z" level=info msg="Start recovering state"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.920785039Z" level=info msg="Start event monitor"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.921012364Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.921112731Z" level=info msg="Start streaming server"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.921198171Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.921421730Z" level=info msg="runtime interface starting up..."
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.921496201Z" level=info msg="starting plugins..."
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.921561104Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 10:37:27 functional-652709 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.922785206Z" level=info msg="containerd successfully booted in 0.088911s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:49:42.600610   21259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:42.601079   21259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:42.603014   21259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:42.603428   21259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:42.605017   21259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +12.907693] overlayfs: idmapped layers are currently not supported
	[Dec13 09:25] overlayfs: idmapped layers are currently not supported
	[ +26.192425] overlayfs: idmapped layers are currently not supported
	[Dec13 09:26] overlayfs: idmapped layers are currently not supported
	[ +25.729788] overlayfs: idmapped layers are currently not supported
	[Dec13 09:27] overlayfs: idmapped layers are currently not supported
	[Dec13 09:28] overlayfs: idmapped layers are currently not supported
	[Dec13 09:31] overlayfs: idmapped layers are currently not supported
	[Dec13 09:32] overlayfs: idmapped layers are currently not supported
	[ +17.979093] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec13 09:43] overlayfs: idmapped layers are currently not supported
	[Dec13 09:45] overlayfs: idmapped layers are currently not supported
	[ +25.885102] overlayfs: idmapped layers are currently not supported
	[Dec13 09:46] overlayfs: idmapped layers are currently not supported
	[ +22.078149] overlayfs: idmapped layers are currently not supported
	[Dec13 09:47] overlayfs: idmapped layers are currently not supported
	[Dec13 09:48] overlayfs: idmapped layers are currently not supported
	[Dec13 09:49] overlayfs: idmapped layers are currently not supported
	[Dec13 09:51] overlayfs: idmapped layers are currently not supported
	[ +17.043564] overlayfs: idmapped layers are currently not supported
	[Dec13 09:52] overlayfs: idmapped layers are currently not supported
	[Dec13 09:53] overlayfs: idmapped layers are currently not supported
	[Dec13 10:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:19] hrtimer: interrupt took 21247146 ns
	
	
	==> kernel <==
	 10:49:42 up  3:32,  0 user,  load average: 0.11, 0.19, 0.47
	Linux functional-652709 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:49:39 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:49:40 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 13 10:49:40 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:40 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:40 functional-652709 kubelet[21092]: E1213 10:49:40.223722   21092 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:49:40 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:49:40 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:49:40 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 13 10:49:40 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:40 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:40 functional-652709 kubelet[21134]: E1213 10:49:40.976012   21134 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:49:40 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:49:40 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:49:41 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 13 10:49:41 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:41 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:41 functional-652709 kubelet[21167]: E1213 10:49:41.673120   21167 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:49:41 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:49:41 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:49:42 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
	Dec 13 10:49:42 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:42 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:42 functional-652709 kubelet[21229]: E1213 10:49:42.474086   21229 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:49:42 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:49:42 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
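Note: the kubelet journal above exposes the root cause: kubelet v1.35 validates its configuration and exits on a cgroup v1 host (restart counter 322 through 325) unless cgroup v1 support is explicitly re-enabled, exactly as the preflight warning said. A minimal sketch of that override, assuming the KubeletConfiguration field is the lowerCamelCase failCgroupV1 (inferred from the warning's 'FailCgroupV1'; verify against the kubelet config reference) and using the config path kubeadm wrote above:

	# inside the node: opt back into cgroup v1 support, then restart
	# (if the key already exists in the file, edit it in place instead of appending)
	sudo tee -a /var/lib/kubelet/config.yaml <<'EOF'
	failCgroupV1: false
	EOF
	sudo systemctl restart kubelet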
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-652709 -n functional-652709
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-652709 -n functional-652709: exit status 2 (383.03009ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-652709" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.19s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-652709 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-652709 apply -f testdata/invalidsvc.yaml: exit status 1 (60.119808ms)

** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2328: kubectl --context functional-652709 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)
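Note: this failure is a side effect of the dead apiserver rather than of the manifest itself: kubectl needs the server's OpenAPI schema to validate. The error text names the escape hatch; a sketch, only meaningful once the apiserver is reachable again, with the caveat that skipping validation lets genuine manifest errors through:

	# skip server-side validation, as the error message suggests
	kubectl --context functional-652709 apply -f testdata/invalidsvc.yaml --validate=false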

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.73s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-652709 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-652709 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-652709 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-652709 --alsologtostderr -v=1] stderr:
I1213 10:52:06.702075  376834 out.go:360] Setting OutFile to fd 1 ...
I1213 10:52:06.702211  376834 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:52:06.702220  376834 out.go:374] Setting ErrFile to fd 2...
I1213 10:52:06.702224  376834 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:52:06.702473  376834 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
I1213 10:52:06.702770  376834 mustload.go:66] Loading cluster: functional-652709
I1213 10:52:06.703231  376834 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 10:52:06.703711  376834 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
I1213 10:52:06.723110  376834 host.go:66] Checking if "functional-652709" exists ...
I1213 10:52:06.723431  376834 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1213 10:52:06.776926  376834 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:52:06.76776996 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1213 10:52:06.777049  376834 api_server.go:166] Checking apiserver status ...
I1213 10:52:06.777114  376834 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1213 10:52:06.777164  376834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
I1213 10:52:06.794140  376834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
W1213 10:52:06.900296  376834 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1213 10:52:06.903708  376834 out.go:179] * The control-plane node functional-652709 apiserver is not running: (state=Stopped)
I1213 10:52:06.906657  376834 out.go:179]   To start a cluster, run: "minikube start -p functional-652709"
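Note: `dashboard --url` bails out early because its apiserver check above (`sudo pgrep -xnf kube-apiserver.*minikube.*`) finds no running process. A sketch of the recovery path the tool itself prints:

	# bring the control plane back first, then re-request the dashboard URL
	minikube start -p functional-652709
	minikube dashboard --url --port 36195 -p functional-652709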
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-652709
helpers_test.go:244: (dbg) docker inspect functional-652709:

-- stdout --
	[
	    {
	        "Id": "0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f",
	        "Created": "2025-12-13T10:22:44.366993781Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 347931,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:22:44.437030763Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/hostname",
	        "HostsPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/hosts",
	        "LogPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f-json.log",
	        "Name": "/functional-652709",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-652709:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-652709",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f",
	                "LowerDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095-init/diff:/var/lib/docker/overlay2/5d0ca86320f2c06765995d644a73af30cd6ddb36f5c1a2a6b1ebb24af53cdabd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/merged",
	                "UpperDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/diff",
	                "WorkDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-652709",
	                "Source": "/var/lib/docker/volumes/functional-652709/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-652709",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-652709",
	                "name.minikube.sigs.k8s.io": "functional-652709",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "52e527b5bd789a02eb7efb651200033ed4929e5fc7545e9df042d3f777cc9782",
	            "SandboxKey": "/var/run/docker/netns/52e527b5bd78",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-652709": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:23:08:9e:cb:13",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "344f2b940117dadb28d1ef1328f911c0446307288fdfafebfe59f38e473f79cb",
	                    "EndpointID": "8954f96e5987202be5715e7023384fe862744778b2520bccba28c57814f0980f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-652709",
	                        "0f6101071ca2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-652709 -n functional-652709
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-652709 -n functional-652709: exit status 2 (322.399819ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-652709 service hello-node --url --format={{.IP}}                                                                                         │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:51 UTC │                     │
	│ service   │ functional-652709 service hello-node --url                                                                                                          │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:51 UTC │                     │
	│ mount     │ -p functional-652709 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2540388801/001:/mount-9p --alsologtostderr -v=1              │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:51 UTC │                     │
	│ ssh       │ functional-652709 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:51 UTC │                     │
	│ ssh       │ functional-652709 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:51 UTC │ 13 Dec 25 10:51 UTC │
	│ ssh       │ functional-652709 ssh -- ls -la /mount-9p                                                                                                           │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:51 UTC │ 13 Dec 25 10:51 UTC │
	│ ssh       │ functional-652709 ssh cat /mount-9p/test-1765623116872019606                                                                                        │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:51 UTC │ 13 Dec 25 10:51 UTC │
	│ ssh       │ functional-652709 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:51 UTC │                     │
	│ ssh       │ functional-652709 ssh sudo umount -f /mount-9p                                                                                                      │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:51 UTC │ 13 Dec 25 10:51 UTC │
	│ mount     │ -p functional-652709 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2196735515/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:51 UTC │                     │
	│ ssh       │ functional-652709 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:51 UTC │                     │
	│ ssh       │ functional-652709 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ ssh       │ functional-652709 ssh -- ls -la /mount-9p                                                                                                           │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ ssh       │ functional-652709 ssh sudo umount -f /mount-9p                                                                                                      │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │                     │
	│ mount     │ -p functional-652709 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1116597745/001:/mount1 --alsologtostderr -v=1                │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │                     │
	│ mount     │ -p functional-652709 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1116597745/001:/mount3 --alsologtostderr -v=1                │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │                     │
	│ mount     │ -p functional-652709 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1116597745/001:/mount2 --alsologtostderr -v=1                │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │                     │
	│ ssh       │ functional-652709 ssh findmnt -T /mount1                                                                                                            │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ ssh       │ functional-652709 ssh findmnt -T /mount2                                                                                                            │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ ssh       │ functional-652709 ssh findmnt -T /mount3                                                                                                            │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ mount     │ -p functional-652709 --kill=true                                                                                                                    │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │                     │
	│ start     │ -p functional-652709 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │                     │
	│ start     │ -p functional-652709 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │                     │
	│ start     │ -p functional-652709 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0           │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-652709 --alsologtostderr -v=1                                                                                      │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:52:06
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:52:06.450755  376758 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:52:06.450873  376758 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:52:06.450913  376758 out.go:374] Setting ErrFile to fd 2...
	I1213 10:52:06.450919  376758 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:52:06.451185  376758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 10:52:06.451584  376758 out.go:368] Setting JSON to false
	I1213 10:52:06.452450  376758 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12879,"bootTime":1765610247,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 10:52:06.452554  376758 start.go:143] virtualization:  
	I1213 10:52:06.457685  376758 out.go:179] * [functional-652709] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:52:06.460878  376758 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 10:52:06.460951  376758 notify.go:221] Checking for updates...
	I1213 10:52:06.467700  376758 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:52:06.470736  376758 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:52:06.473591  376758 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 10:52:06.476443  376758 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:52:06.479409  376758 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:52:06.482813  376758 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:52:06.483365  376758 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:52:06.512382  376758 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:52:06.512519  376758 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:52:06.577177  376758 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:52:06.565738318 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:52:06.577300  376758 docker.go:319] overlay module found
	I1213 10:52:06.580486  376758 out.go:179] * Using the docker driver based on existing profile
	I1213 10:52:06.583327  376758 start.go:309] selected driver: docker
	I1213 10:52:06.583358  376758 start.go:927] validating driver "docker" against &{Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:52:06.583472  376758 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:52:06.583587  376758 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:52:06.648334  376758 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:52:06.639010814 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:52:06.648776  376758 cni.go:84] Creating CNI manager for ""
	I1213 10:52:06.648844  376758 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:52:06.648894  376758 start.go:353] cluster config:
	{Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:52:06.652023  376758 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861227644Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861318114Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861438764Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861513571Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861578449Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861642483Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861707304Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861776350Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861845545Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861934818Z" level=info msg="Connect containerd service"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.862289545Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.862951451Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.874919104Z" level=info msg="Start subscribing containerd event"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.875103516Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.875569851Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.881349344Z" level=info msg="Start recovering state"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.920785039Z" level=info msg="Start event monitor"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.921012364Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.921112731Z" level=info msg="Start streaming server"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.921198171Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.921421730Z" level=info msg="runtime interface starting up..."
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.921496201Z" level=info msg="starting plugins..."
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.921561104Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 10:37:27 functional-652709 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.922785206Z" level=info msg="containerd successfully booted in 0.088911s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:07.953588   23474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:07.954437   23474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:07.956034   23474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:07.956573   23474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:07.958126   23474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +12.907693] overlayfs: idmapped layers are currently not supported
	[Dec13 09:25] overlayfs: idmapped layers are currently not supported
	[ +26.192425] overlayfs: idmapped layers are currently not supported
	[Dec13 09:26] overlayfs: idmapped layers are currently not supported
	[ +25.729788] overlayfs: idmapped layers are currently not supported
	[Dec13 09:27] overlayfs: idmapped layers are currently not supported
	[Dec13 09:28] overlayfs: idmapped layers are currently not supported
	[Dec13 09:31] overlayfs: idmapped layers are currently not supported
	[Dec13 09:32] overlayfs: idmapped layers are currently not supported
	[ +17.979093] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec13 09:43] overlayfs: idmapped layers are currently not supported
	[Dec13 09:45] overlayfs: idmapped layers are currently not supported
	[ +25.885102] overlayfs: idmapped layers are currently not supported
	[Dec13 09:46] overlayfs: idmapped layers are currently not supported
	[ +22.078149] overlayfs: idmapped layers are currently not supported
	[Dec13 09:47] overlayfs: idmapped layers are currently not supported
	[Dec13 09:48] overlayfs: idmapped layers are currently not supported
	[Dec13 09:49] overlayfs: idmapped layers are currently not supported
	[Dec13 09:51] overlayfs: idmapped layers are currently not supported
	[ +17.043564] overlayfs: idmapped layers are currently not supported
	[Dec13 09:52] overlayfs: idmapped layers are currently not supported
	[Dec13 09:53] overlayfs: idmapped layers are currently not supported
	[Dec13 10:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:19] hrtimer: interrupt took 21247146 ns
	
	
	==> kernel <==
	 10:52:08 up  3:34,  0 user,  load average: 1.32, 0.47, 0.52
	Linux functional-652709 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:52:04 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:52:05 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 516.
	Dec 13 10:52:05 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:52:05 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:52:05 functional-652709 kubelet[23337]: E1213 10:52:05.729142   23337 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:52:05 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:52:05 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:52:06 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 517.
	Dec 13 10:52:06 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:52:06 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:52:06 functional-652709 kubelet[23358]: E1213 10:52:06.472812   23358 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:52:06 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:52:06 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:52:07 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 518.
	Dec 13 10:52:07 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:52:07 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:52:07 functional-652709 kubelet[23374]: E1213 10:52:07.227856   23374 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:52:07 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:52:07 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:52:07 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 519.
	Dec 13 10:52:07 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:52:07 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:52:07 functional-652709 kubelet[23478]: E1213 10:52:07.975930   23478 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:52:07 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:52:07 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-652709 -n functional-652709
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-652709 -n functional-652709: exit status 2 (322.550712ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-652709" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.73s)
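The kubelet excerpt above carries the actual root cause: every systemd restart (counters 516 through 519) dies in configuration validation with "kubelet is configured to not run on a host using cgroup v1", so the apiserver never comes up and this DashboardCmd failure is downstream of that. A quick, hedged check of which cgroup hierarchy the node container sees (the mount's filesystem type is the usual discriminator):

# Filesystem type of the cgroup mount inside the kic container:
docker exec functional-652709 stat -fc %T /sys/fs/cgroup/
# "cgroup2fs" => cgroup v2; "tmpfs" => the cgroup v1 hierarchy this kubelet rejects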

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-652709 status: exit status 2 (344.590675ms)

-- stdout --
	functional-652709
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-652709 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-652709 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (344.024679ms)

-- stdout --
	host:Running,kublet:Running,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-652709 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-652709 status -o json: exit status 2 (321.074769ms)

-- stdout --
	{"Name":"functional-652709","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-652709 status -o json" : exit status 2
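All three status invocations above print usable output yet exit 2: minikube status encodes component health in its exit code, so a consumer must capture stdout before testing the return status. A short sketch of doing that with the JSON form (assuming jq is available on the host; the field names match the output shown above):

out=$(out/minikube-linux-arm64 -p functional-652709 status -o json) || echo "status exited $?"
echo "$out" | jq -r '"host=\(.Host) kubelet=\(.Kubelet) apiserver=\(.APIServer)"'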
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-652709
helpers_test.go:244: (dbg) docker inspect functional-652709:

-- stdout --
	[
	    {
	        "Id": "0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f",
	        "Created": "2025-12-13T10:22:44.366993781Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 347931,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:22:44.437030763Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/hostname",
	        "HostsPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/hosts",
	        "LogPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f-json.log",
	        "Name": "/functional-652709",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-652709:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-652709",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f",
	                "LowerDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095-init/diff:/var/lib/docker/overlay2/5d0ca86320f2c06765995d644a73af30cd6ddb36f5c1a2a6b1ebb24af53cdabd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/merged",
	                "UpperDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/diff",
	                "WorkDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-652709",
	                "Source": "/var/lib/docker/volumes/functional-652709/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-652709",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-652709",
	                "name.minikube.sigs.k8s.io": "functional-652709",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "52e527b5bd789a02eb7efb651200033ed4929e5fc7545e9df042d3f777cc9782",
	            "SandboxKey": "/var/run/docker/netns/52e527b5bd78",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-652709": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:23:08:9e:cb:13",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "344f2b940117dadb28d1ef1328f911c0446307288fdfafebfe59f38e473f79cb",
	                    "EndpointID": "8954f96e5987202be5715e7023384fe862744778b2520bccba28c57814f0980f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-652709",
	                        "0f6101071ca2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
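The inspect output above shows each container port published on a distinct 127.0.0.1 host port (22→33125, 2376→33126, 5000→33127, 8441→33128, 32443→33129). A single mapping can be pulled out with a Go template; this is the same pattern the start log further below uses to locate the SSH port:

    # minimal sketch, reusing the template seen later in this log
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-652709
    # prints 33125 for the container inspected above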
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-652709 -n functional-652709
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-652709 -n functional-652709: exit status 2 (343.349497ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
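As the helper notes, `minikube status` can print Running for the host yet still exit non-zero when other components are unhealthy, which is why the harness treats exit status 2 as possibly benign. A minimal sketch of capturing both the printed state and the exit code, using the same invocation as above:

    state=$(out/minikube-linux-arm64 status --format='{{.Host}}' -p functional-652709 -n functional-652709) || rc=$?
    echo "host=${state} exit=${rc:-0}"   # here: host=Running exit=2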
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ functional-652709 addons list -o json                                                                                                               │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:51 UTC │ 13 Dec 25 10:51 UTC │
	│ service │ functional-652709 service list                                                                                                                      │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:51 UTC │                     │
	│ service │ functional-652709 service list -o json                                                                                                              │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:51 UTC │                     │
	│ service │ functional-652709 service --namespace=default --https --url hello-node                                                                              │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:51 UTC │                     │
	│ service │ functional-652709 service hello-node --url --format={{.IP}}                                                                                         │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:51 UTC │                     │
	│ service │ functional-652709 service hello-node --url                                                                                                          │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:51 UTC │                     │
	│ mount   │ -p functional-652709 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2540388801/001:/mount-9p --alsologtostderr -v=1              │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:51 UTC │                     │
	│ ssh     │ functional-652709 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:51 UTC │                     │
	│ ssh     │ functional-652709 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:51 UTC │ 13 Dec 25 10:51 UTC │
	│ ssh     │ functional-652709 ssh -- ls -la /mount-9p                                                                                                           │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:51 UTC │ 13 Dec 25 10:51 UTC │
	│ ssh     │ functional-652709 ssh cat /mount-9p/test-1765623116872019606                                                                                        │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:51 UTC │ 13 Dec 25 10:51 UTC │
	│ ssh     │ functional-652709 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:51 UTC │                     │
	│ ssh     │ functional-652709 ssh sudo umount -f /mount-9p                                                                                                      │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:51 UTC │ 13 Dec 25 10:51 UTC │
	│ mount   │ -p functional-652709 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2196735515/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:51 UTC │                     │
	│ ssh     │ functional-652709 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:51 UTC │                     │
	│ ssh     │ functional-652709 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ ssh     │ functional-652709 ssh -- ls -la /mount-9p                                                                                                           │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ ssh     │ functional-652709 ssh sudo umount -f /mount-9p                                                                                                      │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │                     │
	│ mount   │ -p functional-652709 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1116597745/001:/mount1 --alsologtostderr -v=1                │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │                     │
	│ mount   │ -p functional-652709 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1116597745/001:/mount3 --alsologtostderr -v=1                │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │                     │
	│ mount   │ -p functional-652709 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1116597745/001:/mount2 --alsologtostderr -v=1                │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │                     │
	│ ssh     │ functional-652709 ssh findmnt -T /mount1                                                                                                            │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ ssh     │ functional-652709 ssh findmnt -T /mount2                                                                                                            │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ ssh     │ functional-652709 ssh findmnt -T /mount3                                                                                                            │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ mount   │ -p functional-652709 --kill=true                                                                                                                    │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
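The mount entries in the audit above are verified by probing the mount point over SSH. The same check can be run directly against the profile; it exits 0 only when /mount-9p is actually backed by a 9p filesystem:

    out/minikube-linux-arm64 -p functional-652709 ssh "findmnt -T /mount-9p | grep 9p"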
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:37:25
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:37:25.138350  359214 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:37:25.138465  359214 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:37:25.138469  359214 out.go:374] Setting ErrFile to fd 2...
	I1213 10:37:25.138473  359214 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:37:25.138742  359214 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 10:37:25.139091  359214 out.go:368] Setting JSON to false
	I1213 10:37:25.139911  359214 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":11998,"bootTime":1765610247,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 10:37:25.139964  359214 start.go:143] virtualization:  
	I1213 10:37:25.143535  359214 out.go:179] * [functional-652709] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:37:25.146407  359214 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 10:37:25.146500  359214 notify.go:221] Checking for updates...
	I1213 10:37:25.152371  359214 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:37:25.155287  359214 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:37:25.158064  359214 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 10:37:25.162885  359214 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:37:25.165865  359214 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:37:25.169282  359214 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:37:25.169378  359214 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:37:25.203946  359214 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:37:25.204073  359214 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:37:25.282140  359214 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 10:37:25.272517516 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:37:25.282233  359214 docker.go:319] overlay module found
	I1213 10:37:25.285314  359214 out.go:179] * Using the docker driver based on existing profile
	I1213 10:37:25.288091  359214 start.go:309] selected driver: docker
	I1213 10:37:25.288098  359214 start.go:927] validating driver "docker" against &{Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:37:25.288215  359214 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:37:25.288310  359214 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:37:25.346233  359214 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 10:37:25.336833323 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:37:25.346649  359214 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:37:25.346672  359214 cni.go:84] Creating CNI manager for ""
	I1213 10:37:25.346746  359214 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:37:25.346788  359214 start.go:353] cluster config:
	{Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:37:25.351648  359214 out.go:179] * Starting "functional-652709" primary control-plane node in "functional-652709" cluster
	I1213 10:37:25.354472  359214 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 10:37:25.357365  359214 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:37:25.360240  359214 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 10:37:25.360279  359214 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 10:37:25.360290  359214 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:37:25.360305  359214 cache.go:65] Caching tarball of preloaded images
	I1213 10:37:25.360390  359214 preload.go:238] Found /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 10:37:25.360398  359214 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 10:37:25.360508  359214 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/config.json ...
	I1213 10:37:25.379669  359214 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:37:25.379680  359214 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:37:25.379701  359214 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:37:25.379731  359214 start.go:360] acquireMachinesLock for functional-652709: {Name:mk6e8c40fbbb5af0bb2468340fd710875030300d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:37:25.379795  359214 start.go:364] duration metric: took 46.958µs to acquireMachinesLock for "functional-652709"
	I1213 10:37:25.379812  359214 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:37:25.379817  359214 fix.go:54] fixHost starting: 
	I1213 10:37:25.380078  359214 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
	I1213 10:37:25.396614  359214 fix.go:112] recreateIfNeeded on functional-652709: state=Running err=<nil>
	W1213 10:37:25.396632  359214 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 10:37:25.399750  359214 out.go:252] * Updating the running docker "functional-652709" container ...
	I1213 10:37:25.399771  359214 machine.go:94] provisionDockerMachine start ...
	I1213 10:37:25.399844  359214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:37:25.416990  359214 main.go:143] libmachine: Using SSH client type: native
	I1213 10:37:25.417324  359214 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1213 10:37:25.417330  359214 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:37:25.566232  359214 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-652709
	
	I1213 10:37:25.566247  359214 ubuntu.go:182] provisioning hostname "functional-652709"
	I1213 10:37:25.566312  359214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:37:25.583930  359214 main.go:143] libmachine: Using SSH client type: native
	I1213 10:37:25.584239  359214 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1213 10:37:25.584247  359214 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-652709 && echo "functional-652709" | sudo tee /etc/hostname
	I1213 10:37:25.743712  359214 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-652709
	
	I1213 10:37:25.743781  359214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:37:25.761387  359214 main.go:143] libmachine: Using SSH client type: native
	I1213 10:37:25.761683  359214 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1213 10:37:25.761697  359214 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-652709' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-652709/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-652709' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:37:25.915528  359214 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:37:25.915543  359214 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-307042/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-307042/.minikube}
	I1213 10:37:25.915567  359214 ubuntu.go:190] setting up certificates
	I1213 10:37:25.915589  359214 provision.go:84] configureAuth start
	I1213 10:37:25.915650  359214 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-652709
	I1213 10:37:25.937241  359214 provision.go:143] copyHostCerts
	I1213 10:37:25.937315  359214 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem, removing ...
	I1213 10:37:25.937323  359214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem
	I1213 10:37:25.937397  359214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem (1082 bytes)
	I1213 10:37:25.937493  359214 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem, removing ...
	I1213 10:37:25.937497  359214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem
	I1213 10:37:25.937521  359214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem (1123 bytes)
	I1213 10:37:25.937570  359214 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem, removing ...
	I1213 10:37:25.937573  359214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem
	I1213 10:37:25.937593  359214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem (1675 bytes)
	I1213 10:37:25.937635  359214 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem org=jenkins.functional-652709 san=[127.0.0.1 192.168.49.2 functional-652709 localhost minikube]
	I1213 10:37:26.244127  359214 provision.go:177] copyRemoteCerts
	I1213 10:37:26.244186  359214 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:37:26.244225  359214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:37:26.264658  359214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:37:26.370401  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:37:26.387044  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:37:26.404259  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:37:26.421389  359214 provision.go:87] duration metric: took 505.777833ms to configureAuth
	I1213 10:37:26.421407  359214 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:37:26.421614  359214 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:37:26.421620  359214 machine.go:97] duration metric: took 1.021844371s to provisionDockerMachine
	I1213 10:37:26.421627  359214 start.go:293] postStartSetup for "functional-652709" (driver="docker")
	I1213 10:37:26.421636  359214 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:37:26.421692  359214 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:37:26.421728  359214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:37:26.439115  359214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:37:26.542461  359214 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:37:26.545680  359214 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:37:26.545698  359214 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:37:26.545710  359214 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/addons for local assets ...
	I1213 10:37:26.545763  359214 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/files for local assets ...
	I1213 10:37:26.545836  359214 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> 3089152.pem in /etc/ssl/certs
	I1213 10:37:26.545911  359214 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/test/nested/copy/308915/hosts -> hosts in /etc/test/nested/copy/308915
	I1213 10:37:26.545959  359214 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/308915
	I1213 10:37:26.553760  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 10:37:26.571190  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/test/nested/copy/308915/hosts --> /etc/test/nested/copy/308915/hosts (40 bytes)
	I1213 10:37:26.588882  359214 start.go:296] duration metric: took 167.239997ms for postStartSetup
	I1213 10:37:26.588951  359214 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:37:26.588988  359214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:37:26.606145  359214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:37:26.708907  359214 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
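The two df one-liners above are the disk checks for /var: the first prints the percentage of space used, the second the gigabytes still available. Sample outputs below are illustrative, not taken from this run:

    df -h /var | awk 'NR==2{print $5}'    # e.g. 12%  (space used)
    df -BG /var | awk 'NR==2{print $4}'   # e.g. 17G  (space available)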
	I1213 10:37:26.713681  359214 fix.go:56] duration metric: took 1.333856829s for fixHost
	I1213 10:37:26.713698  359214 start.go:83] releasing machines lock for "functional-652709", held for 1.333895015s
	I1213 10:37:26.713781  359214 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-652709
	I1213 10:37:26.733362  359214 ssh_runner.go:195] Run: cat /version.json
	I1213 10:37:26.733405  359214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:37:26.733670  359214 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:37:26.733727  359214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:37:26.755898  359214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:37:26.764378  359214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:37:26.858420  359214 ssh_runner.go:195] Run: systemctl --version
	I1213 10:37:26.952524  359214 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:37:26.956969  359214 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:37:26.957030  359214 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:37:26.964724  359214 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
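The find invocation above is logged as a raw argument vector, so the shell quoting has been stripped. Reproducing it interactively requires escaping the parentheses and globs; a sketch of the quoted form (GNU find substitutes {} even inside the sh -c argument):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;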
	I1213 10:37:26.964738  359214 start.go:496] detecting cgroup driver to use...
	I1213 10:37:26.964768  359214 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:37:26.964823  359214 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 10:37:26.980031  359214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 10:37:26.993058  359214 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:37:26.993140  359214 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:37:27.016019  359214 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:37:27.029352  359214 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:37:27.143876  359214 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:37:27.259911  359214 docker.go:234] disabling docker service ...
	I1213 10:37:27.259973  359214 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:37:27.275304  359214 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:37:27.288715  359214 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:37:27.403391  359214 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:37:27.538286  359214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:37:27.551384  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:37:27.565344  359214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 10:37:27.574020  359214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 10:37:27.583189  359214 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 10:37:27.583255  359214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 10:37:27.591895  359214 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:37:27.600966  359214 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 10:37:27.609996  359214 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:37:27.618821  359214 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:37:27.626864  359214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 10:37:27.635612  359214 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 10:37:27.644477  359214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 10:37:27.653477  359214 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:37:27.661005  359214 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:37:27.668365  359214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:37:27.776281  359214 ssh_runner.go:195] Run: sudo systemctl restart containerd
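The run of sed edits above rewrites /etc/containerd/config.toml in place (pause image, restrict_oom_score_adj, cgroup driver, runc v2 shim, CNI conf dir, unprivileged ports) before containerd is restarted. A quick spot-check that the edits landed, assuming the same config path:

    grep -nE 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|enable_unprivileged_ports' /etc/containerd/config.toml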
	I1213 10:37:27.924718  359214 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 10:37:27.924777  359214 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 10:37:27.928729  359214 start.go:564] Will wait 60s for crictl version
	I1213 10:37:27.928789  359214 ssh_runner.go:195] Run: which crictl
	I1213 10:37:27.932637  359214 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:37:27.956729  359214 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 10:37:27.956786  359214 ssh_runner.go:195] Run: containerd --version
	I1213 10:37:27.979747  359214 ssh_runner.go:195] Run: containerd --version
	I1213 10:37:28.007018  359214 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 10:37:28.009973  359214 cli_runner.go:164] Run: docker network inspect functional-652709 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:37:28.026979  359214 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 10:37:28.034215  359214 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1213 10:37:28.037114  359214 kubeadm.go:884] updating cluster {Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:37:28.037277  359214 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 10:37:28.037366  359214 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:37:28.069735  359214 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 10:37:28.069748  359214 containerd.go:534] Images already preloaded, skipping extraction
	I1213 10:37:28.069804  359214 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:37:28.094782  359214 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 10:37:28.094795  359214 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:37:28.094801  359214 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1213 10:37:28.094901  359214 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-652709 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
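The unit fragment above is installed as a systemd drop-in (the 10-kubeadm.conf copied a few steps below); the bare ExecStart= line resets any previously configured command before minikube's own kubelet invocation is set. One way to view the merged unit on the node:

    systemctl cat kubelet                              # unit plus drop-ins such as 10-kubeadm.conf
    sudo systemctl daemon-reload && sudo systemctl start kubelet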
	I1213 10:37:28.094963  359214 ssh_runner.go:195] Run: sudo crictl info
	I1213 10:37:28.123071  359214 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1213 10:37:28.123096  359214 cni.go:84] Creating CNI manager for ""
	I1213 10:37:28.123104  359214 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:37:28.123112  359214 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:37:28.123134  359214 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-652709 NodeName:functional-652709 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:37:28.123244  359214 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-652709"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 10:37:28.123313  359214 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:37:28.131175  359214 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:37:28.131238  359214 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:37:28.138792  359214 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 10:37:28.151537  359214 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:37:28.169495  359214 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
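The generated kubeadm config is staged as kubeadm.yaml.new rather than executed immediately. For illustration only: on a fresh control plane a config like this is what kubeadm init --config consumes (this restart run does not necessarily take that path):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new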
	I1213 10:37:28.184364  359214 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:37:28.188525  359214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:37:28.305096  359214 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:37:28.912534  359214 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709 for IP: 192.168.49.2
	I1213 10:37:28.912575  359214 certs.go:195] generating shared ca certs ...
	I1213 10:37:28.912591  359214 certs.go:227] acquiring lock for ca certs: {Name:mkca84a85b1b664dba08ef567e84d493256f825e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:37:28.912719  359214 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key
	I1213 10:37:28.912771  359214 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key
	I1213 10:37:28.912778  359214 certs.go:257] generating profile certs ...
	I1213 10:37:28.912857  359214 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.key
	I1213 10:37:28.912917  359214 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.key.86e7afd1
	I1213 10:37:28.912954  359214 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.key
	I1213 10:37:28.913063  359214 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem (1338 bytes)
	W1213 10:37:28.913092  359214 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915_empty.pem, impossibly tiny 0 bytes
	I1213 10:37:28.913099  359214 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:37:28.913124  359214 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:37:28.913151  359214 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:37:28.913174  359214 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem (1675 bytes)
	I1213 10:37:28.913221  359214 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 10:37:28.913808  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:37:28.931820  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:37:28.949028  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:37:28.966476  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:37:28.984047  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:37:29.002075  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 10:37:29.020305  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:37:29.037811  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:37:29.054630  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:37:29.071547  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem --> /usr/share/ca-certificates/308915.pem (1338 bytes)
	I1213 10:37:29.088633  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /usr/share/ca-certificates/3089152.pem (1708 bytes)
	I1213 10:37:29.105638  359214 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:37:29.118149  359214 ssh_runner.go:195] Run: openssl version
	I1213 10:37:29.124118  359214 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:37:29.131416  359214 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:37:29.138705  359214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:37:29.142329  359214 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:37:29.142388  359214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:37:29.183023  359214 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:37:29.190485  359214 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/308915.pem
	I1213 10:37:29.197738  359214 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/308915.pem /etc/ssl/certs/308915.pem
	I1213 10:37:29.205192  359214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/308915.pem
	I1213 10:37:29.209070  359214 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:22 /usr/share/ca-certificates/308915.pem
	I1213 10:37:29.209124  359214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308915.pem
	I1213 10:37:29.250234  359214 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:37:29.257744  359214 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3089152.pem
	I1213 10:37:29.265022  359214 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3089152.pem /etc/ssl/certs/3089152.pem
	I1213 10:37:29.272593  359214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3089152.pem
	I1213 10:37:29.276820  359214 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:22 /usr/share/ca-certificates/3089152.pem
	I1213 10:37:29.276874  359214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3089152.pem
	I1213 10:37:29.317834  359214 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
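Each PEM installed above is symlinked into /etc/ssl/certs and then verified through OpenSSL's subject-hash name: `openssl x509 -hash -noout` prints the hash (e.g. b5213941 for minikubeCA.pem), and the loop checks that `<hash>.0` exists as a symlink. A sketch of that check, assuming root privileges; the helper flow is illustrative, not minikube's certs.go:

    // Compute OpenSSL's subject hash for a PEM and ensure the
    // /etc/ssl/certs/<hash>.0 symlink that TLS libraries look up exists.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	pem := "/usr/share/ca-certificates/minikubeCA.pem"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := "/etc/ssl/certs/" + hash + ".0"
    	if _, err := os.Lstat(link); err != nil {
    		// missing: create it, mirroring `sudo ln -fs` in the log (needs root)
    		if err := os.Symlink(pem, link); err != nil {
    			fmt.Println("symlink failed:", err)
    			return
    		}
    	}
    	fmt.Println("trusted via", link)
    }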
	I1213 10:37:29.325126  359214 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:37:29.328844  359214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:37:29.369639  359214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:37:29.410192  359214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:37:29.467336  359214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:37:29.508158  359214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:37:29.549013  359214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
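`openssl x509 -checkend 86400` exits non-zero when a certificate expires within the next 86400 seconds, which is how the six control-plane certs above are screened for imminent expiry. A pure-Go equivalent using crypto/x509, offered as an assumed stand-in rather than what minikube actually runs:

    // Illustrative equivalent of `openssl x509 -checkend 86400`: report
    // whether a PEM certificate expires within the next 24 hours.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("expires within 24h:", soon) // the openssl form exits 1 here
    }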
	I1213 10:37:29.589618  359214 kubeadm.go:401] StartCluster: {Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:37:29.589715  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 10:37:29.589775  359214 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:37:29.617382  359214 cri.go:89] found id: ""
	I1213 10:37:29.617441  359214 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:37:29.625150  359214 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:37:29.625165  359214 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:37:29.625217  359214 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:37:29.632536  359214 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:37:29.633037  359214 kubeconfig.go:125] found "functional-652709" server: "https://192.168.49.2:8441"
	I1213 10:37:29.635539  359214 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:37:29.643331  359214 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-13 10:22:52.033435592 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-13 10:37:28.181843120 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
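The drift check above is a plain `diff -u` between the kubeadm.yaml already on the node and the freshly rendered kubeadm.yaml.new: diff exiting with status 1 and a non-empty unified diff (here, enable-admission-plugins changing to NamespaceAutoProvision) is what triggers the reconfigure. A sketch under that assumption:

    // Run `diff -u old new`; exit status 1 means the configs differ and the
    // cluster must be reconfigured, status 2 means diff itself failed.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("diff", "-u",
    		"/var/tmp/minikube/kubeadm.yaml",
    		"/var/tmp/minikube/kubeadm.yaml.new")
    	out, err := cmd.CombinedOutput()
    	if exit, ok := err.(*exec.ExitError); ok && exit.ExitCode() == 1 {
    		fmt.Printf("kubeadm config drift detected, will reconfigure:\n%s", out)
    	} else if err != nil {
    		fmt.Println("diff failed:", err)
    	} else {
    		fmt.Println("config unchanged")
    	}
    }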
	I1213 10:37:29.643344  359214 kubeadm.go:1161] stopping kube-system containers ...
	I1213 10:37:29.643355  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1213 10:37:29.643418  359214 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:37:29.681117  359214 cri.go:89] found id: ""
	I1213 10:37:29.681185  359214 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 10:37:29.700348  359214 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:37:29.708464  359214 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 13 10:26 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 13 10:27 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 13 10:26 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 13 10:27 /etc/kubernetes/scheduler.conf
	
	I1213 10:37:29.708519  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:37:29.716973  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:37:29.724972  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:37:29.725027  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:37:29.732670  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:37:29.740374  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:37:29.740426  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:37:29.747796  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:37:29.755836  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:37:29.755895  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
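For each component kubeconfig, the loop above greps for the expected control-plane endpoint; a grep exit status of 1 means the URL is absent, so the stale file is removed and kubeadm regenerates it in the next phase. An illustrative version (file list and endpoint copied from the log; the in-process Contains check stands in for grep):

    // Keep a kubeconfig only if it already points at the expected
    // control-plane endpoint; otherwise delete it so kubeadm rewrites it.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8441"
    	for _, f := range []string{
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// stale or unreadable: mirror the `sudo rm -f` in the log
    			os.Remove(f)
    			fmt.Println("removed", f)
    		}
    	}
    }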
	I1213 10:37:29.763121  359214 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:37:29.770676  359214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:37:29.815944  359214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:37:31.022963  359214 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.206994632s)
	I1213 10:37:31.023029  359214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:37:31.239388  359214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:37:31.313712  359214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
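Rather than a full `kubeadm init`, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the updated config, with PATH pinned to the per-version binaries directory. A sketch mirroring those invocations; the phase list and paths come from the log, the loop itself is an assumption:

    // Replay selected `kubeadm init` phases against the new config, using
    // the same bash invocation shape as the log lines above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
    	for _, p := range phases {
    		cmd := exec.Command("/bin/bash", "-c",
    			`env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase `+
    				p+` --config /var/tmp/minikube/kubeadm.yaml`)
    		if out, err := cmd.CombinedOutput(); err != nil {
    			fmt.Printf("phase %q failed: %v\n%s", p, err, out)
    			return
    		}
    	}
    	fmt.Println("all phases replayed")
    }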
	I1213 10:37:31.358670  359214 api_server.go:52] waiting for apiserver process to appear ...
	I1213 10:37:31.358755  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:31.859658  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same `sudo pgrep -xnf kube-apiserver.*minikube.*` probe repeats on a ~500ms cadence (alternating .359/.859 timestamps) from 10:37:32.359 through 10:38:30.359, never finding an apiserver process ...]
	I1213 10:38:30.859558  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
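The block above is api_server.go's readiness wait: probe for a kube-apiserver process with pgrep every ~500ms until a deadline. A minimal sketch of that loop; the 60-second deadline matches the observed 10:37:31 to 10:38:31 window, but the loop shape is an assumption:

    // Poll for the apiserver process until it appears or the deadline passes.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(60 * time.Second)
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 only when a matching process exists
    		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			fmt.Println("apiserver process is up")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for apiserver process")
    }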
	I1213 10:38:31.359176  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:31.359252  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:31.383827  359214 cri.go:89] found id: ""
	I1213 10:38:31.383841  359214 logs.go:282] 0 containers: []
	W1213 10:38:31.383849  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:31.383855  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:31.383917  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:31.412267  359214 cri.go:89] found id: ""
	I1213 10:38:31.412291  359214 logs.go:282] 0 containers: []
	W1213 10:38:31.412300  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:31.412305  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:31.412364  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:31.437736  359214 cri.go:89] found id: ""
	I1213 10:38:31.437751  359214 logs.go:282] 0 containers: []
	W1213 10:38:31.437758  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:31.437763  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:31.437824  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:31.461791  359214 cri.go:89] found id: ""
	I1213 10:38:31.461806  359214 logs.go:282] 0 containers: []
	W1213 10:38:31.461813  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:31.461818  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:31.461880  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:31.488695  359214 cri.go:89] found id: ""
	I1213 10:38:31.488709  359214 logs.go:282] 0 containers: []
	W1213 10:38:31.488717  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:31.488722  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:31.488789  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:31.517230  359214 cri.go:89] found id: ""
	I1213 10:38:31.517245  359214 logs.go:282] 0 containers: []
	W1213 10:38:31.517274  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:31.517281  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:31.517340  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:31.541920  359214 cri.go:89] found id: ""
	I1213 10:38:31.541934  359214 logs.go:282] 0 containers: []
	W1213 10:38:31.541942  359214 logs.go:284] No container was found matching "kindnet"
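With the process never appearing, the fallback above scans the CRI for each control-plane component by name: `crictl ps -a --quiet --name=<component>` prints matching container IDs, and every scan here comes back empty. An illustrative scan loop, with the component list taken from the log:

    // List container IDs per component name; an empty result means the
    // component never started under containerd.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
    	for _, c := range components {
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+c).Output()
    		if err != nil {
    			fmt.Println(c, "scan failed:", err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		fmt.Printf("%s: %d containers\n", c, len(ids))
    	}
    }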
	I1213 10:38:31.541951  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:31.541962  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:31.558143  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:31.558161  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:31.623427  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:31.614536   10797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.615101   10797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.616803   10797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.617190   10797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.619517   10797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:31.614536   10797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.615101   10797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.616803   10797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.617190   10797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.619517   10797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:31.623438  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:31.623449  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:31.686774  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:31.686794  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:31.719218  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:31.719234  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
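Each diagnostics round then gathers the kubelet and containerd journals, filtered dmesg output, `kubectl describe nodes` (which fails with connection refused while the apiserver is down), and a container listing. A sketch of that gathering step; the shell commands are copied from the log, but the map-of-commands structure is an assumption for illustration, not minikube's logs.go:

    // Run each diagnostic command via bash and print its output.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	sources := map[string]string{
    		"kubelet":    "sudo journalctl -u kubelet -n 400",
    		"containerd": "sudo journalctl -u containerd -n 400",
    		"dmesg":      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    		"containers": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    	}
    	for name, cmd := range sources {
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		if err != nil {
    			fmt.Printf("gathering %s failed: %v\n", name, err)
    			continue
    		}
    		fmt.Printf("== %s ==\n%s\n", name, out)
    	}
    }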
	I1213 10:38:34.280556  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:34.293171  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:34.293241  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:34.319161  359214 cri.go:89] found id: ""
	I1213 10:38:34.319176  359214 logs.go:282] 0 containers: []
	W1213 10:38:34.319183  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:34.319189  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:34.319245  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:34.348792  359214 cri.go:89] found id: ""
	I1213 10:38:34.348806  359214 logs.go:282] 0 containers: []
	W1213 10:38:34.348814  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:34.348819  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:34.348879  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:34.374794  359214 cri.go:89] found id: ""
	I1213 10:38:34.374809  359214 logs.go:282] 0 containers: []
	W1213 10:38:34.374816  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:34.374822  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:34.374883  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:34.399481  359214 cri.go:89] found id: ""
	I1213 10:38:34.399496  359214 logs.go:282] 0 containers: []
	W1213 10:38:34.399503  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:34.399509  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:34.399567  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:34.424169  359214 cri.go:89] found id: ""
	I1213 10:38:34.424184  359214 logs.go:282] 0 containers: []
	W1213 10:38:34.424191  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:34.424196  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:34.424300  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:34.449747  359214 cri.go:89] found id: ""
	I1213 10:38:34.449762  359214 logs.go:282] 0 containers: []
	W1213 10:38:34.449769  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:34.449775  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:34.449839  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:34.475244  359214 cri.go:89] found id: ""
	I1213 10:38:34.475259  359214 logs.go:282] 0 containers: []
	W1213 10:38:34.475266  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:34.475274  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:34.475284  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:34.531644  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:34.531665  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:34.548876  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:34.548895  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:34.612831  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:34.605081   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.605477   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.607131   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.607458   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.609038   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:34.605081   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.605477   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.607131   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.607458   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.609038   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:34.612842  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:34.612853  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:34.677588  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:34.677607  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:37.204561  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:37.215900  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:37.215960  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:37.240644  359214 cri.go:89] found id: ""
	I1213 10:38:37.240679  359214 logs.go:282] 0 containers: []
	W1213 10:38:37.240697  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:37.240710  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:37.240796  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:37.265154  359214 cri.go:89] found id: ""
	I1213 10:38:37.265168  359214 logs.go:282] 0 containers: []
	W1213 10:38:37.265176  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:37.265181  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:37.265240  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:37.290309  359214 cri.go:89] found id: ""
	I1213 10:38:37.290323  359214 logs.go:282] 0 containers: []
	W1213 10:38:37.290331  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:37.290336  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:37.290402  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:37.314207  359214 cri.go:89] found id: ""
	I1213 10:38:37.314222  359214 logs.go:282] 0 containers: []
	W1213 10:38:37.314229  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:37.314235  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:37.314294  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:37.338622  359214 cri.go:89] found id: ""
	I1213 10:38:37.338637  359214 logs.go:282] 0 containers: []
	W1213 10:38:37.338645  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:37.338651  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:37.338731  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:37.362866  359214 cri.go:89] found id: ""
	I1213 10:38:37.362881  359214 logs.go:282] 0 containers: []
	W1213 10:38:37.362888  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:37.362894  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:37.362954  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:37.388313  359214 cri.go:89] found id: ""
	I1213 10:38:37.388327  359214 logs.go:282] 0 containers: []
	W1213 10:38:37.388335  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:37.388343  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:37.388355  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:37.405018  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:37.405035  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:37.467928  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:37.459672   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.460192   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.461721   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.462120   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.463584   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:37.459672   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.460192   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.461721   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.462120   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.463584   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:37.467941  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:37.467952  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:37.536764  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:37.536793  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:37.565751  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:37.565767  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:40.124516  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:40.136075  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:40.136155  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:40.180740  359214 cri.go:89] found id: ""
	I1213 10:38:40.180755  359214 logs.go:282] 0 containers: []
	W1213 10:38:40.180763  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:40.180771  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:40.180844  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:40.214880  359214 cri.go:89] found id: ""
	I1213 10:38:40.214894  359214 logs.go:282] 0 containers: []
	W1213 10:38:40.214912  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:40.214918  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:40.214986  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:40.255502  359214 cri.go:89] found id: ""
	I1213 10:38:40.255516  359214 logs.go:282] 0 containers: []
	W1213 10:38:40.255524  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:40.255529  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:40.255590  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:40.279736  359214 cri.go:89] found id: ""
	I1213 10:38:40.279750  359214 logs.go:282] 0 containers: []
	W1213 10:38:40.279761  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:40.279766  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:40.279827  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:40.305162  359214 cri.go:89] found id: ""
	I1213 10:38:40.305186  359214 logs.go:282] 0 containers: []
	W1213 10:38:40.305194  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:40.305199  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:40.305268  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:40.330075  359214 cri.go:89] found id: ""
	I1213 10:38:40.330089  359214 logs.go:282] 0 containers: []
	W1213 10:38:40.330097  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:40.330103  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:40.330171  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:40.356608  359214 cri.go:89] found id: ""
	I1213 10:38:40.356623  359214 logs.go:282] 0 containers: []
	W1213 10:38:40.356631  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:40.356639  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:40.356649  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:40.386833  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:40.386850  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:40.442503  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:40.442523  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:40.458859  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:40.458875  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:40.526393  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:40.517849   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.518498   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.520192   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.520775   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.522583   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:40.517849   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.518498   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.520192   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.520775   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.522583   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:40.526415  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:40.526425  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:43.093725  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:43.104280  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:43.104351  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:43.128552  359214 cri.go:89] found id: ""
	I1213 10:38:43.128566  359214 logs.go:282] 0 containers: []
	W1213 10:38:43.128574  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:43.128579  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:43.128637  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:43.153838  359214 cri.go:89] found id: ""
	I1213 10:38:43.153853  359214 logs.go:282] 0 containers: []
	W1213 10:38:43.153861  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:43.153866  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:43.153925  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:43.182604  359214 cri.go:89] found id: ""
	I1213 10:38:43.182617  359214 logs.go:282] 0 containers: []
	W1213 10:38:43.182624  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:43.182631  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:43.182751  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:43.212454  359214 cri.go:89] found id: ""
	I1213 10:38:43.212481  359214 logs.go:282] 0 containers: []
	W1213 10:38:43.212489  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:43.212501  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:43.212572  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:43.239973  359214 cri.go:89] found id: ""
	I1213 10:38:43.239987  359214 logs.go:282] 0 containers: []
	W1213 10:38:43.240005  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:43.240011  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:43.240074  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:43.264733  359214 cri.go:89] found id: ""
	I1213 10:38:43.264748  359214 logs.go:282] 0 containers: []
	W1213 10:38:43.264755  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:43.264767  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:43.264826  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:43.291333  359214 cri.go:89] found id: ""
	I1213 10:38:43.291347  359214 logs.go:282] 0 containers: []
	W1213 10:38:43.291354  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:43.291362  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:43.291372  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:43.348037  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:43.348057  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:43.364359  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:43.364377  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:43.426788  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:43.418519   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.419245   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.420917   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.421479   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.423061   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:43.418519   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.419245   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.420917   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.421479   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.423061   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:43.426809  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:43.426819  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:43.492237  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:43.492258  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:46.019179  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:46.029376  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:46.029454  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:46.053215  359214 cri.go:89] found id: ""
	I1213 10:38:46.053229  359214 logs.go:282] 0 containers: []
	W1213 10:38:46.053236  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:46.053242  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:46.053315  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:46.078867  359214 cri.go:89] found id: ""
	I1213 10:38:46.078882  359214 logs.go:282] 0 containers: []
	W1213 10:38:46.078889  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:46.078895  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:46.078955  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:46.104476  359214 cri.go:89] found id: ""
	I1213 10:38:46.104490  359214 logs.go:282] 0 containers: []
	W1213 10:38:46.104498  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:46.104503  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:46.104584  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:46.132735  359214 cri.go:89] found id: ""
	I1213 10:38:46.132750  359214 logs.go:282] 0 containers: []
	W1213 10:38:46.132758  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:46.132763  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:46.132844  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:46.171837  359214 cri.go:89] found id: ""
	I1213 10:38:46.171852  359214 logs.go:282] 0 containers: []
	W1213 10:38:46.171859  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:46.171865  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:46.171925  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:46.214470  359214 cri.go:89] found id: ""
	I1213 10:38:46.214484  359214 logs.go:282] 0 containers: []
	W1213 10:38:46.214501  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:46.214508  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:46.214581  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:46.241616  359214 cri.go:89] found id: ""
	I1213 10:38:46.241631  359214 logs.go:282] 0 containers: []
	W1213 10:38:46.241638  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:46.241646  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:46.241657  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:46.269691  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:46.269717  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:46.326434  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:46.326454  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:46.342808  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:46.342825  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:46.406446  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:46.398462   11336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.399218   11336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.400888   11336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.401204   11336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.402682   11336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:46.398462   11336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.399218   11336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.400888   11336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.401204   11336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.402682   11336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:46.406456  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:46.406466  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
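
Each poll cycle in this log follows the same sequence: probe for a running kube-apiserver process, then ask the CRI runtime for each expected control-plane container by name; every query returns an empty ID list (found id: ""). A minimal shell sketch of that probe, assembled from the exact commands shown above (assumes it runs on the minikube node with crictl on the PATH):

    # Check whether an apiserver process exists at all
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

    # Ask the CRI runtime for each control-plane container by name;
    # in this log every lookup prints nothing, so the cycle repeats
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
        sudo crictl ps -a --quiet --name="$name"
    done
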
	I1213 10:38:48.970215  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:48.980360  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:48.980424  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:49.007836  359214 cri.go:89] found id: ""
	I1213 10:38:49.007857  359214 logs.go:282] 0 containers: []
	W1213 10:38:49.007865  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:49.007870  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:49.007930  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:49.032102  359214 cri.go:89] found id: ""
	I1213 10:38:49.032116  359214 logs.go:282] 0 containers: []
	W1213 10:38:49.032124  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:49.032129  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:49.032188  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:49.056548  359214 cri.go:89] found id: ""
	I1213 10:38:49.056562  359214 logs.go:282] 0 containers: []
	W1213 10:38:49.056577  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:49.056582  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:49.056638  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:49.080172  359214 cri.go:89] found id: ""
	I1213 10:38:49.080186  359214 logs.go:282] 0 containers: []
	W1213 10:38:49.080194  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:49.080199  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:49.080257  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:49.104358  359214 cri.go:89] found id: ""
	I1213 10:38:49.104372  359214 logs.go:282] 0 containers: []
	W1213 10:38:49.104380  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:49.104385  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:49.104456  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:49.131026  359214 cri.go:89] found id: ""
	I1213 10:38:49.131041  359214 logs.go:282] 0 containers: []
	W1213 10:38:49.131048  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:49.131054  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:49.131111  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:49.155850  359214 cri.go:89] found id: ""
	I1213 10:38:49.155865  359214 logs.go:282] 0 containers: []
	W1213 10:38:49.155872  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:49.155881  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:49.155891  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:49.237398  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:49.228981   11424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.229481   11424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.231324   11424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.231926   11424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.233542   11424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:49.228981   11424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.229481   11424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.231324   11424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.231926   11424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.233542   11424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:49.237409  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:49.237422  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:49.300000  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:49.300020  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:49.330957  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:49.330973  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:49.392815  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:49.392834  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:51.909143  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:51.919406  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:51.919465  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:51.948136  359214 cri.go:89] found id: ""
	I1213 10:38:51.948150  359214 logs.go:282] 0 containers: []
	W1213 10:38:51.948157  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:51.948163  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:51.948221  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:51.972396  359214 cri.go:89] found id: ""
	I1213 10:38:51.972411  359214 logs.go:282] 0 containers: []
	W1213 10:38:51.972420  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:51.972424  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:51.972497  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:52.003416  359214 cri.go:89] found id: ""
	I1213 10:38:52.003433  359214 logs.go:282] 0 containers: []
	W1213 10:38:52.003442  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:52.003449  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:52.003533  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:52.031359  359214 cri.go:89] found id: ""
	I1213 10:38:52.031374  359214 logs.go:282] 0 containers: []
	W1213 10:38:52.031382  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:52.031387  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:52.031447  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:52.056514  359214 cri.go:89] found id: ""
	I1213 10:38:52.056529  359214 logs.go:282] 0 containers: []
	W1213 10:38:52.056536  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:52.056541  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:52.056619  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:52.085509  359214 cri.go:89] found id: ""
	I1213 10:38:52.085524  359214 logs.go:282] 0 containers: []
	W1213 10:38:52.085533  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:52.085539  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:52.085613  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:52.113117  359214 cri.go:89] found id: ""
	I1213 10:38:52.113131  359214 logs.go:282] 0 containers: []
	W1213 10:38:52.113138  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:52.113146  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:52.113157  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:52.129605  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:52.129627  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:52.198531  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:52.190917   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.191383   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.192873   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.193169   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.194579   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:52.190917   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.191383   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.192873   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.193169   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.194579   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:52.198542  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:52.198554  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:52.267617  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:52.267640  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:52.301362  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:52.301379  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:54.858319  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:54.868860  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:54.868931  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:54.895935  359214 cri.go:89] found id: ""
	I1213 10:38:54.895949  359214 logs.go:282] 0 containers: []
	W1213 10:38:54.895956  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:54.895962  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:54.896020  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:54.924712  359214 cri.go:89] found id: ""
	I1213 10:38:54.924727  359214 logs.go:282] 0 containers: []
	W1213 10:38:54.924734  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:54.924740  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:54.924807  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:54.949662  359214 cri.go:89] found id: ""
	I1213 10:38:54.949677  359214 logs.go:282] 0 containers: []
	W1213 10:38:54.949685  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:54.949690  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:54.949758  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:54.973861  359214 cri.go:89] found id: ""
	I1213 10:38:54.973876  359214 logs.go:282] 0 containers: []
	W1213 10:38:54.973883  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:54.973889  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:54.973949  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:54.999167  359214 cri.go:89] found id: ""
	I1213 10:38:54.999182  359214 logs.go:282] 0 containers: []
	W1213 10:38:54.999190  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:54.999196  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:54.999267  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:55.030614  359214 cri.go:89] found id: ""
	I1213 10:38:55.030630  359214 logs.go:282] 0 containers: []
	W1213 10:38:55.030638  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:55.030644  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:55.030764  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:55.059903  359214 cri.go:89] found id: ""
	I1213 10:38:55.059918  359214 logs.go:282] 0 containers: []
	W1213 10:38:55.059925  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:55.059933  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:55.059943  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:55.129097  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:55.129156  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:55.157699  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:55.157717  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:55.226688  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:55.226706  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:55.244093  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:55.244111  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:55.309464  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:55.300977   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.301803   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.303423   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.304086   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.305672   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:55.300977   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.301803   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.303423   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.304086   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.305672   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
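
Every "describe nodes" attempt above fails the same way: connection refused on localhost:8441, the --apiserver-port chosen for this profile. That points at nothing listening on the port at all, rather than a kubeconfig or certificate problem. A hypothetical spot-check on the node (curl and ss are not part of the logged commands; this is an illustrative sketch only):

    # Confirm nothing is listening on the apiserver port before
    # suspecting kubeconfig or certificates
    curl -sk https://localhost:8441/livez || echo "refused: apiserver not listening"
    sudo ss -tlnp | grep 8441 || echo "no listener on 8441"
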
	I1213 10:38:57.809736  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:57.819959  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:57.820025  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:57.844184  359214 cri.go:89] found id: ""
	I1213 10:38:57.844198  359214 logs.go:282] 0 containers: []
	W1213 10:38:57.844206  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:57.844211  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:57.844270  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:57.869511  359214 cri.go:89] found id: ""
	I1213 10:38:57.869524  359214 logs.go:282] 0 containers: []
	W1213 10:38:57.869532  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:57.869553  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:57.869613  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:57.895212  359214 cri.go:89] found id: ""
	I1213 10:38:57.895226  359214 logs.go:282] 0 containers: []
	W1213 10:38:57.895234  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:57.895239  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:57.895298  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:57.919989  359214 cri.go:89] found id: ""
	I1213 10:38:57.920004  359214 logs.go:282] 0 containers: []
	W1213 10:38:57.920011  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:57.920018  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:57.920076  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:57.948250  359214 cri.go:89] found id: ""
	I1213 10:38:57.948263  359214 logs.go:282] 0 containers: []
	W1213 10:38:57.948271  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:57.948277  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:57.948334  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:57.974322  359214 cri.go:89] found id: ""
	I1213 10:38:57.974337  359214 logs.go:282] 0 containers: []
	W1213 10:38:57.974345  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:57.974350  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:57.974423  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:58.005721  359214 cri.go:89] found id: ""
	I1213 10:38:58.005737  359214 logs.go:282] 0 containers: []
	W1213 10:38:58.005747  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:58.005757  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:58.005768  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:58.064186  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:58.064207  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:58.080907  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:58.080924  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:58.146147  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:58.137210   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.137944   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.139692   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.140402   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.141981   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:58.137210   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.137944   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.139692   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.140402   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.141981   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:58.146159  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:58.146170  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:58.214235  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:58.214253  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:00.744729  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:00.755028  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:00.755086  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:00.780193  359214 cri.go:89] found id: ""
	I1213 10:39:00.780207  359214 logs.go:282] 0 containers: []
	W1213 10:39:00.780215  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:00.780221  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:00.780293  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:00.806094  359214 cri.go:89] found id: ""
	I1213 10:39:00.806109  359214 logs.go:282] 0 containers: []
	W1213 10:39:00.806116  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:00.806123  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:00.806190  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:00.830215  359214 cri.go:89] found id: ""
	I1213 10:39:00.830229  359214 logs.go:282] 0 containers: []
	W1213 10:39:00.830236  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:00.830241  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:00.830298  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:00.858553  359214 cri.go:89] found id: ""
	I1213 10:39:00.858567  359214 logs.go:282] 0 containers: []
	W1213 10:39:00.858575  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:00.858581  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:00.858638  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:00.883276  359214 cri.go:89] found id: ""
	I1213 10:39:00.883290  359214 logs.go:282] 0 containers: []
	W1213 10:39:00.883298  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:00.883304  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:00.883366  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:00.908199  359214 cri.go:89] found id: ""
	I1213 10:39:00.908214  359214 logs.go:282] 0 containers: []
	W1213 10:39:00.908222  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:00.908235  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:00.908292  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:00.933487  359214 cri.go:89] found id: ""
	I1213 10:39:00.933502  359214 logs.go:282] 0 containers: []
	W1213 10:39:00.933510  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:00.933518  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:00.933529  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:00.999819  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:00.990764   11836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:00.991604   11836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:00.993277   11836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:00.993599   11836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:00.995238   11836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:00.990764   11836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:00.991604   11836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:00.993277   11836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:00.993599   11836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:00.995238   11836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:00.999831  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:00.999851  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:01.070347  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:01.070376  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:01.099348  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:01.099367  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:01.160766  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:01.160789  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:03.683134  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:03.693419  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:03.693479  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:03.724358  359214 cri.go:89] found id: ""
	I1213 10:39:03.724373  359214 logs.go:282] 0 containers: []
	W1213 10:39:03.724380  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:03.724386  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:03.724446  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:03.749342  359214 cri.go:89] found id: ""
	I1213 10:39:03.749357  359214 logs.go:282] 0 containers: []
	W1213 10:39:03.749365  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:03.749370  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:03.749428  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:03.777066  359214 cri.go:89] found id: ""
	I1213 10:39:03.777081  359214 logs.go:282] 0 containers: []
	W1213 10:39:03.777088  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:03.777094  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:03.777153  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:03.802375  359214 cri.go:89] found id: ""
	I1213 10:39:03.802390  359214 logs.go:282] 0 containers: []
	W1213 10:39:03.802397  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:03.802405  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:03.802463  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:03.828597  359214 cri.go:89] found id: ""
	I1213 10:39:03.828613  359214 logs.go:282] 0 containers: []
	W1213 10:39:03.828620  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:03.828626  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:03.828688  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:03.854166  359214 cri.go:89] found id: ""
	I1213 10:39:03.854187  359214 logs.go:282] 0 containers: []
	W1213 10:39:03.854195  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:03.854201  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:03.854261  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:03.879516  359214 cri.go:89] found id: ""
	I1213 10:39:03.879533  359214 logs.go:282] 0 containers: []
	W1213 10:39:03.879540  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:03.879549  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:03.879559  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:03.936679  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:03.936700  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:03.953300  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:03.953317  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:04.029874  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:04.020037   11948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.021068   11948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.022008   11948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.023857   11948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.024567   11948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:04.020037   11948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.021068   11948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.022008   11948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.023857   11948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.024567   11948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:04.029886  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:04.029896  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:04.097622  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:04.097643  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
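
When the container probes come up empty, minikube falls back to gathering diagnostics from the same five sources each cycle, only the order varies: kubelet journal, dmesg, describe nodes (which fails as shown), containerd journal, and container status. A one-shot sketch of that collection, using the commands verbatim from the log:

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u containerd -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
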
	I1213 10:39:06.630848  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:06.641568  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:06.641629  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:06.667996  359214 cri.go:89] found id: ""
	I1213 10:39:06.668011  359214 logs.go:282] 0 containers: []
	W1213 10:39:06.668019  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:06.668024  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:06.668090  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:06.697263  359214 cri.go:89] found id: ""
	I1213 10:39:06.697278  359214 logs.go:282] 0 containers: []
	W1213 10:39:06.697293  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:06.697299  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:06.697359  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:06.722757  359214 cri.go:89] found id: ""
	I1213 10:39:06.722772  359214 logs.go:282] 0 containers: []
	W1213 10:39:06.722780  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:06.722785  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:06.722844  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:06.746758  359214 cri.go:89] found id: ""
	I1213 10:39:06.746772  359214 logs.go:282] 0 containers: []
	W1213 10:39:06.746780  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:06.746786  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:06.746845  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:06.775078  359214 cri.go:89] found id: ""
	I1213 10:39:06.775093  359214 logs.go:282] 0 containers: []
	W1213 10:39:06.775100  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:06.775105  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:06.775164  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:06.800898  359214 cri.go:89] found id: ""
	I1213 10:39:06.800914  359214 logs.go:282] 0 containers: []
	W1213 10:39:06.800921  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:06.800926  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:06.800983  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:06.829594  359214 cri.go:89] found id: ""
	I1213 10:39:06.829624  359214 logs.go:282] 0 containers: []
	W1213 10:39:06.829648  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:06.829656  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:06.829666  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:06.893293  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:06.893314  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:06.921544  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:06.921562  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:06.981949  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:06.981969  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:06.998794  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:06.998816  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:07.067966  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:07.059691   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.060374   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.061914   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.062229   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.063682   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
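The probe failing here can be rerun by hand to confirm the diagnosis. A minimal sketch, assuming a shell on the node (e.g. via minikube ssh); the command itself is taken verbatim from the log above:

	# Re-run the describe-nodes probe with the node's embedded kubeconfig.
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig
	# "connect: connection refused" from [::1]:8441 means nothing is listening on
	# the apiserver port, consistent with the empty crictl listings above.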
	I1213 10:39:09.568245  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:09.578515  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:09.578574  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:09.604486  359214 cri.go:89] found id: ""
	I1213 10:39:09.604500  359214 logs.go:282] 0 containers: []
	W1213 10:39:09.604507  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:09.604512  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:09.604572  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:09.628878  359214 cri.go:89] found id: ""
	I1213 10:39:09.628894  359214 logs.go:282] 0 containers: []
	W1213 10:39:09.628902  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:09.628912  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:09.628971  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:09.654182  359214 cri.go:89] found id: ""
	I1213 10:39:09.654196  359214 logs.go:282] 0 containers: []
	W1213 10:39:09.654204  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:09.654209  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:09.654268  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:09.679850  359214 cri.go:89] found id: ""
	I1213 10:39:09.679864  359214 logs.go:282] 0 containers: []
	W1213 10:39:09.679871  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:09.679877  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:09.679937  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:09.708630  359214 cri.go:89] found id: ""
	I1213 10:39:09.708644  359214 logs.go:282] 0 containers: []
	W1213 10:39:09.708651  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:09.708657  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:09.708716  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:09.732554  359214 cri.go:89] found id: ""
	I1213 10:39:09.732568  359214 logs.go:282] 0 containers: []
	W1213 10:39:09.732575  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:09.732581  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:09.732642  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:09.757631  359214 cri.go:89] found id: ""
	I1213 10:39:09.757646  359214 logs.go:282] 0 containers: []
	W1213 10:39:09.757654  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:09.757663  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:09.757674  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:09.816181  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:09.816203  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:09.832514  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:09.832531  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:09.897359  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:09.888543   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:09.889254   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:09.891102   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:09.891693   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:09.893450   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:09.897369  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:09.897379  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:09.960943  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:09.960964  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
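Each pass above runs the same crictl query once per control-plane component. A sketch that collapses those seven queries into one loop, assuming crictl is on the node's PATH (the echo wording is illustrative):

	# --quiet prints container IDs only, so an empty result means no container,
	# running or exited, matches the component name.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  ids=$(sudo crictl ps -a --quiet --name="$c")
	  if [ -n "$ids" ]; then echo "$c: $ids"; else echo "no container matching $c"; fi
	done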
	I1213 10:39:12.490984  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:12.501823  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:12.501893  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:12.532332  359214 cri.go:89] found id: ""
	I1213 10:39:12.532347  359214 logs.go:282] 0 containers: []
	W1213 10:39:12.532354  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:12.532359  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:12.532419  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:12.558457  359214 cri.go:89] found id: ""
	I1213 10:39:12.558471  359214 logs.go:282] 0 containers: []
	W1213 10:39:12.558479  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:12.558485  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:12.558545  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:12.585075  359214 cri.go:89] found id: ""
	I1213 10:39:12.585089  359214 logs.go:282] 0 containers: []
	W1213 10:39:12.585097  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:12.585102  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:12.585160  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:12.614401  359214 cri.go:89] found id: ""
	I1213 10:39:12.614415  359214 logs.go:282] 0 containers: []
	W1213 10:39:12.614422  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:12.614428  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:12.614486  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:12.639152  359214 cri.go:89] found id: ""
	I1213 10:39:12.639166  359214 logs.go:282] 0 containers: []
	W1213 10:39:12.639173  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:12.639179  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:12.639240  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:12.667593  359214 cri.go:89] found id: ""
	I1213 10:39:12.667607  359214 logs.go:282] 0 containers: []
	W1213 10:39:12.667614  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:12.667620  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:12.667681  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:12.691984  359214 cri.go:89] found id: ""
	I1213 10:39:12.691997  359214 logs.go:282] 0 containers: []
	W1213 10:39:12.692005  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:12.692013  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:12.692024  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:12.756546  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:12.748299   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:12.748690   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:12.750244   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:12.750570   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:12.752183   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:12.756556  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:12.756567  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:12.820864  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:12.820885  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:12.853253  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:12.853289  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:12.911659  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:12.911678  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
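The timestamps show the harness re-checking for an apiserver process roughly every three seconds. A sketch of such a bounded wait, assuming bash; the 3s interval and 120s budget are assumptions chosen to mirror the visible cadence, not values read from the harness:

	# Poll with pgrep (as the log does) until a process appears or the deadline passes.
	deadline=$((SECONDS + 120))
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null 2>&1; do
	  if [ "$SECONDS" -ge "$deadline" ]; then echo 'timed out waiting for kube-apiserver' >&2; exit 1; fi
	  sleep 3
	done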
	I1213 10:39:15.427988  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:15.439459  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:15.439523  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:15.476834  359214 cri.go:89] found id: ""
	I1213 10:39:15.476849  359214 logs.go:282] 0 containers: []
	W1213 10:39:15.476856  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:15.476862  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:15.476926  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:15.501586  359214 cri.go:89] found id: ""
	I1213 10:39:15.501601  359214 logs.go:282] 0 containers: []
	W1213 10:39:15.501609  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:15.501614  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:15.501675  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:15.526367  359214 cri.go:89] found id: ""
	I1213 10:39:15.526381  359214 logs.go:282] 0 containers: []
	W1213 10:39:15.526399  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:15.526406  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:15.526473  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:15.551126  359214 cri.go:89] found id: ""
	I1213 10:39:15.551141  359214 logs.go:282] 0 containers: []
	W1213 10:39:15.551148  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:15.551154  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:15.551209  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:15.576958  359214 cri.go:89] found id: ""
	I1213 10:39:15.576973  359214 logs.go:282] 0 containers: []
	W1213 10:39:15.576990  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:15.576996  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:15.577062  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:15.601287  359214 cri.go:89] found id: ""
	I1213 10:39:15.601300  359214 logs.go:282] 0 containers: []
	W1213 10:39:15.601308  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:15.601313  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:15.601371  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:15.628822  359214 cri.go:89] found id: ""
	I1213 10:39:15.628837  359214 logs.go:282] 0 containers: []
	W1213 10:39:15.628844  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:15.628852  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:15.628862  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:15.644985  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:15.645002  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:15.711548  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:15.703095   12361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:15.703681   12361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:15.705285   12361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:15.705963   12361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:15.707559   12361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:15.711559  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:15.711571  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:15.775011  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:15.775031  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:15.802522  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:15.802545  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:18.359921  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:18.369925  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:18.369992  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:18.393448  359214 cri.go:89] found id: ""
	I1213 10:39:18.393462  359214 logs.go:282] 0 containers: []
	W1213 10:39:18.393470  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:18.393476  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:18.393532  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:18.426863  359214 cri.go:89] found id: ""
	I1213 10:39:18.426876  359214 logs.go:282] 0 containers: []
	W1213 10:39:18.426884  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:18.426889  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:18.426946  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:18.472251  359214 cri.go:89] found id: ""
	I1213 10:39:18.472264  359214 logs.go:282] 0 containers: []
	W1213 10:39:18.472272  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:18.472277  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:18.472333  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:18.500412  359214 cri.go:89] found id: ""
	I1213 10:39:18.500427  359214 logs.go:282] 0 containers: []
	W1213 10:39:18.500434  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:18.500440  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:18.500500  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:18.524823  359214 cri.go:89] found id: ""
	I1213 10:39:18.524837  359214 logs.go:282] 0 containers: []
	W1213 10:39:18.524845  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:18.524850  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:18.524908  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:18.549332  359214 cri.go:89] found id: ""
	I1213 10:39:18.549346  359214 logs.go:282] 0 containers: []
	W1213 10:39:18.549354  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:18.549359  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:18.549417  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:18.577251  359214 cri.go:89] found id: ""
	I1213 10:39:18.577271  359214 logs.go:282] 0 containers: []
	W1213 10:39:18.577279  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:18.577287  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:18.577299  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:18.639510  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:18.639530  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:18.677762  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:18.677777  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:18.737061  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:18.737080  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:18.753422  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:18.753439  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:18.823128  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:18.814301   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:18.815633   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:18.816172   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:18.817539   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:18.818059   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
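The same four log sources are gathered on every iteration; they can be captured once by hand with the exact commands from the log. The output file names below are illustrative:

	# Capture kubelet, containerd, dmesg, and container-status output in one pass.
	sudo journalctl -u kubelet -n 400 > kubelet.log
	sudo journalctl -u containerd -n 400 > containerd.log
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
	sudo crictl ps -a > container-status.txt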
	I1213 10:39:21.323418  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:21.333772  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:21.333833  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:21.368103  359214 cri.go:89] found id: ""
	I1213 10:39:21.368118  359214 logs.go:282] 0 containers: []
	W1213 10:39:21.368125  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:21.368131  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:21.368188  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:21.392848  359214 cri.go:89] found id: ""
	I1213 10:39:21.392862  359214 logs.go:282] 0 containers: []
	W1213 10:39:21.392870  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:21.392875  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:21.392932  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:21.426067  359214 cri.go:89] found id: ""
	I1213 10:39:21.426082  359214 logs.go:282] 0 containers: []
	W1213 10:39:21.426089  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:21.426094  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:21.426153  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:21.453497  359214 cri.go:89] found id: ""
	I1213 10:39:21.453521  359214 logs.go:282] 0 containers: []
	W1213 10:39:21.453529  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:21.453535  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:21.453600  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:21.486155  359214 cri.go:89] found id: ""
	I1213 10:39:21.486170  359214 logs.go:282] 0 containers: []
	W1213 10:39:21.486187  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:21.486193  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:21.486262  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:21.512133  359214 cri.go:89] found id: ""
	I1213 10:39:21.512148  359214 logs.go:282] 0 containers: []
	W1213 10:39:21.512155  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:21.512161  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:21.512219  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:21.536909  359214 cri.go:89] found id: ""
	I1213 10:39:21.536925  359214 logs.go:282] 0 containers: []
	W1213 10:39:21.536932  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:21.536940  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:21.536951  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:21.564635  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:21.564651  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:21.621861  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:21.621882  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:21.638280  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:21.638297  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:21.706649  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:21.698160   12585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.698774   12585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.700554   12585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.701257   12585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.702523   12585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:21.706660  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:21.706678  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:24.270851  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:24.281891  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:24.281959  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:24.306887  359214 cri.go:89] found id: ""
	I1213 10:39:24.306902  359214 logs.go:282] 0 containers: []
	W1213 10:39:24.306910  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:24.306916  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:24.306989  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:24.330995  359214 cri.go:89] found id: ""
	I1213 10:39:24.331009  359214 logs.go:282] 0 containers: []
	W1213 10:39:24.331018  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:24.331023  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:24.331079  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:24.358824  359214 cri.go:89] found id: ""
	I1213 10:39:24.358838  359214 logs.go:282] 0 containers: []
	W1213 10:39:24.358845  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:24.358850  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:24.358907  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:24.383545  359214 cri.go:89] found id: ""
	I1213 10:39:24.383559  359214 logs.go:282] 0 containers: []
	W1213 10:39:24.383566  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:24.383572  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:24.383628  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:24.407288  359214 cri.go:89] found id: ""
	I1213 10:39:24.407302  359214 logs.go:282] 0 containers: []
	W1213 10:39:24.407309  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:24.407315  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:24.407374  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:24.441689  359214 cri.go:89] found id: ""
	I1213 10:39:24.441703  359214 logs.go:282] 0 containers: []
	W1213 10:39:24.441720  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:24.441727  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:24.441796  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:24.469372  359214 cri.go:89] found id: ""
	I1213 10:39:24.469387  359214 logs.go:282] 0 containers: []
	W1213 10:39:24.469394  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:24.469402  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:24.469418  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:24.529071  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:24.529091  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:24.545770  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:24.545786  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:24.619385  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:24.610753   12679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.611526   12679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.613120   12679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.613552   12679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.615328   12679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:24.619395  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:24.619406  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:24.683002  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:24.683029  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:27.214048  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:27.223825  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:27.223885  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:27.249091  359214 cri.go:89] found id: ""
	I1213 10:39:27.249106  359214 logs.go:282] 0 containers: []
	W1213 10:39:27.249114  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:27.249120  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:27.249175  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:27.274216  359214 cri.go:89] found id: ""
	I1213 10:39:27.274231  359214 logs.go:282] 0 containers: []
	W1213 10:39:27.274238  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:27.274243  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:27.274301  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:27.306051  359214 cri.go:89] found id: ""
	I1213 10:39:27.306068  359214 logs.go:282] 0 containers: []
	W1213 10:39:27.306076  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:27.306081  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:27.306162  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:27.329993  359214 cri.go:89] found id: ""
	I1213 10:39:27.330015  359214 logs.go:282] 0 containers: []
	W1213 10:39:27.330022  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:27.330027  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:27.330084  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:27.357738  359214 cri.go:89] found id: ""
	I1213 10:39:27.357759  359214 logs.go:282] 0 containers: []
	W1213 10:39:27.357766  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:27.357772  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:27.357829  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:27.383932  359214 cri.go:89] found id: ""
	I1213 10:39:27.383948  359214 logs.go:282] 0 containers: []
	W1213 10:39:27.383955  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:27.383960  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:27.384021  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:27.408273  359214 cri.go:89] found id: ""
	I1213 10:39:27.408298  359214 logs.go:282] 0 containers: []
	W1213 10:39:27.408306  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:27.408314  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:27.408324  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:27.473400  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:27.473421  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:27.490562  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:27.490580  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:27.560540  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:27.551714   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.552445   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.554637   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.555366   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.556555   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:27.560551  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:27.560562  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:27.623676  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:27.623700  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:30.153068  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:30.164672  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:30.164745  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:30.192223  359214 cri.go:89] found id: ""
	I1213 10:39:30.192239  359214 logs.go:282] 0 containers: []
	W1213 10:39:30.192248  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:30.192254  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:30.192336  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:30.224222  359214 cri.go:89] found id: ""
	I1213 10:39:30.224237  359214 logs.go:282] 0 containers: []
	W1213 10:39:30.224245  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:30.224251  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:30.224319  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:30.250132  359214 cri.go:89] found id: ""
	I1213 10:39:30.250148  359214 logs.go:282] 0 containers: []
	W1213 10:39:30.250156  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:30.250161  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:30.250232  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:30.278166  359214 cri.go:89] found id: ""
	I1213 10:39:30.278182  359214 logs.go:282] 0 containers: []
	W1213 10:39:30.278199  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:30.278205  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:30.278271  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:30.304028  359214 cri.go:89] found id: ""
	I1213 10:39:30.304043  359214 logs.go:282] 0 containers: []
	W1213 10:39:30.304050  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:30.304055  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:30.304112  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:30.328660  359214 cri.go:89] found id: ""
	I1213 10:39:30.328675  359214 logs.go:282] 0 containers: []
	W1213 10:39:30.328693  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:30.328699  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:30.328767  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:30.352850  359214 cri.go:89] found id: ""
	I1213 10:39:30.352865  359214 logs.go:282] 0 containers: []
	W1213 10:39:30.352877  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:30.352886  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:30.352896  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:30.408893  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:30.408912  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:30.428762  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:30.428779  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:30.500428  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:30.492113   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.492871   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.494609   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.495292   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.496285   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
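Beyond the process and container checks, the apiserver port itself can be probed. A sketch, assuming ss and curl are present in the node image (an assumption; /healthz is the apiserver's standard health endpoint):

	# Both checks failing matches the repeated "connection refused" above.
	sudo ss -tlnp | grep -w 8441 || echo 'no listener on 8441'
	curl -sk --max-time 5 https://localhost:8441/healthz || echo 'apiserver not responding'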
	I1213 10:39:30.500438  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:30.500449  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:30.563541  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:30.563560  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:33.092955  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:33.103393  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:33.103457  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:33.128626  359214 cri.go:89] found id: ""
	I1213 10:39:33.128640  359214 logs.go:282] 0 containers: []
	W1213 10:39:33.128647  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:33.128653  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:33.128709  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:33.156533  359214 cri.go:89] found id: ""
	I1213 10:39:33.156548  359214 logs.go:282] 0 containers: []
	W1213 10:39:33.156555  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:33.156561  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:33.156631  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:33.181965  359214 cri.go:89] found id: ""
	I1213 10:39:33.181979  359214 logs.go:282] 0 containers: []
	W1213 10:39:33.181987  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:33.181992  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:33.182066  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:33.210753  359214 cri.go:89] found id: ""
	I1213 10:39:33.210767  359214 logs.go:282] 0 containers: []
	W1213 10:39:33.210775  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:33.210780  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:33.210846  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:33.236369  359214 cri.go:89] found id: ""
	I1213 10:39:33.236384  359214 logs.go:282] 0 containers: []
	W1213 10:39:33.236391  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:33.236396  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:33.236453  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:33.261374  359214 cri.go:89] found id: ""
	I1213 10:39:33.261390  359214 logs.go:282] 0 containers: []
	W1213 10:39:33.261397  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:33.261403  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:33.261476  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:33.286480  359214 cri.go:89] found id: ""
	I1213 10:39:33.286496  359214 logs.go:282] 0 containers: []
	W1213 10:39:33.286512  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:33.286536  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:33.286547  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:33.344247  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:33.344268  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:33.362163  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:33.362178  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:33.431331  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:33.423097   12996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.423938   12996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.425571   12996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.425890   12996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.427375   12996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:33.423097   12996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.423938   12996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.425571   12996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.425890   12996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.427375   12996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:33.431340  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:33.431351  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:33.514221  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:33.514250  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
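From here the start-up code settles into a fixed diagnostic cycle, repeated roughly every three seconds (summarized below). Stripped down to the shell commands visible in the Run: lines above, one iteration amounts to the following sketch; the commands are copied from the log, while the loop ordering and the three-second pause are assumptions inferred from the timestamps:

    # One diagnostic iteration, reconstructed from the Run: lines in this log.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'        # is an apiserver process up?
    for c in kube-apiserver etcd coredns kube-scheduler \
             kube-proxy kube-controller-manager kindnet; do
      sudo crictl ps -a --quiet --name="$c"             # empty output == found id: ""
    done
    sudo journalctl -u kubelet -n 400                   # kubelet logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig       # fails: connection refused
    sudo journalctl -u containerd -n 400                # containerd logs
    sleep 3                                             # assumed pause, inferred from timestamps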
	[... the same diagnostic cycle repeats with identical results roughly every three seconds from 10:39:36 through 10:39:54 (seven more iterations, ~460 log lines): pgrep finds no kube-apiserver process; crictl reports no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet containers; kubelet, dmesg, containerd, and container-status logs are gathered; and every "kubectl describe nodes" attempt fails with "connection refused" on localhost:8441 ...]
	I1213 10:39:56.773483  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:56.783689  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:56.783766  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:56.808277  359214 cri.go:89] found id: ""
	I1213 10:39:56.808291  359214 logs.go:282] 0 containers: []
	W1213 10:39:56.808299  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:56.808304  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:56.808368  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:56.832949  359214 cri.go:89] found id: ""
	I1213 10:39:56.832963  359214 logs.go:282] 0 containers: []
	W1213 10:39:56.832970  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:56.832976  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:56.833036  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:56.858222  359214 cri.go:89] found id: ""
	I1213 10:39:56.858236  359214 logs.go:282] 0 containers: []
	W1213 10:39:56.858250  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:56.858255  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:56.858313  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:56.886516  359214 cri.go:89] found id: ""
	I1213 10:39:56.886531  359214 logs.go:282] 0 containers: []
	W1213 10:39:56.886538  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:56.886543  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:56.886599  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:56.916534  359214 cri.go:89] found id: ""
	I1213 10:39:56.916548  359214 logs.go:282] 0 containers: []
	W1213 10:39:56.916554  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:56.916560  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:56.916620  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:56.941364  359214 cri.go:89] found id: ""
	I1213 10:39:56.941379  359214 logs.go:282] 0 containers: []
	W1213 10:39:56.941391  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:56.941397  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:56.941458  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:56.965977  359214 cri.go:89] found id: ""
	I1213 10:39:56.965991  359214 logs.go:282] 0 containers: []
	W1213 10:39:56.965998  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:56.966006  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:56.966017  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:57.022046  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:57.022066  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:57.038754  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:57.038773  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:57.104023  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:57.095403   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.096172   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.097756   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.098390   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.100006   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:57.095403   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.096172   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.097756   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.098390   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.100006   13836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:57.104033  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:57.104043  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:57.164889  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:57.164909  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
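
The cri.go lines above drive one crictl invocation per control-plane component and treat empty output as zero containers. A hypothetical sketch of that pattern (the helper name listContainerIDs is illustrative, not minikube's API):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs runs the same command as the log above and splits
    // the output into container IDs; empty output means no containers.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns"} {
            ids, err := listContainerIDs(name)
            if err != nil {
                fmt.Printf("listing %q failed: %v\n", name, err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
        }
    }
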
	I1213 10:39:59.697427  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:59.709225  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:59.709293  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:59.736814  359214 cri.go:89] found id: ""
	I1213 10:39:59.736828  359214 logs.go:282] 0 containers: []
	W1213 10:39:59.736835  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:59.736840  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:59.736897  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:59.765228  359214 cri.go:89] found id: ""
	I1213 10:39:59.765243  359214 logs.go:282] 0 containers: []
	W1213 10:39:59.765250  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:59.765255  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:59.765321  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:59.790792  359214 cri.go:89] found id: ""
	I1213 10:39:59.790807  359214 logs.go:282] 0 containers: []
	W1213 10:39:59.790814  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:59.790819  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:59.790877  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:59.817123  359214 cri.go:89] found id: ""
	I1213 10:39:59.817137  359214 logs.go:282] 0 containers: []
	W1213 10:39:59.817149  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:59.817161  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:59.817225  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:59.842465  359214 cri.go:89] found id: ""
	I1213 10:39:59.842480  359214 logs.go:282] 0 containers: []
	W1213 10:39:59.842488  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:59.842493  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:59.842557  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:59.871828  359214 cri.go:89] found id: ""
	I1213 10:39:59.871842  359214 logs.go:282] 0 containers: []
	W1213 10:39:59.871859  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:59.871865  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:59.871921  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:59.895975  359214 cri.go:89] found id: ""
	I1213 10:39:59.895989  359214 logs.go:282] 0 containers: []
	W1213 10:39:59.895996  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:59.896004  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:59.896014  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:59.953038  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:59.953058  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:59.970121  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:59.970140  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:00.112897  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:00.082810   13940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.086414   13940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.089161   13940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.089674   13940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.099187   13940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:00.082810   13940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.086414   13940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.089161   13940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.089674   13940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.099187   13940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:00.112910  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:00.112922  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:00.251770  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:00.251795  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:02.813529  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:02.825083  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:02.825143  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:02.849893  359214 cri.go:89] found id: ""
	I1213 10:40:02.849907  359214 logs.go:282] 0 containers: []
	W1213 10:40:02.849915  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:02.849920  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:02.849979  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:02.876288  359214 cri.go:89] found id: ""
	I1213 10:40:02.876303  359214 logs.go:282] 0 containers: []
	W1213 10:40:02.876311  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:02.876316  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:02.876376  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:02.900996  359214 cri.go:89] found id: ""
	I1213 10:40:02.901011  359214 logs.go:282] 0 containers: []
	W1213 10:40:02.901018  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:02.901023  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:02.901085  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:02.941121  359214 cri.go:89] found id: ""
	I1213 10:40:02.941135  359214 logs.go:282] 0 containers: []
	W1213 10:40:02.941142  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:02.941148  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:02.941212  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:02.977122  359214 cri.go:89] found id: ""
	I1213 10:40:02.977137  359214 logs.go:282] 0 containers: []
	W1213 10:40:02.977145  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:02.977151  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:02.977211  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:03.007614  359214 cri.go:89] found id: ""
	I1213 10:40:03.007631  359214 logs.go:282] 0 containers: []
	W1213 10:40:03.007638  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:03.007644  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:03.007712  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:03.035112  359214 cri.go:89] found id: ""
	I1213 10:40:03.035128  359214 logs.go:282] 0 containers: []
	W1213 10:40:03.035135  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:03.035143  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:03.035153  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:03.092346  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:03.092365  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:03.109513  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:03.109531  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:03.178080  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:03.169681   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.170216   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.171843   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.172389   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.174013   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:03.169681   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.170216   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.171843   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.172389   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.174013   14049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:03.178092  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:03.178103  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:03.240824  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:03.240843  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:05.775438  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:05.785647  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:05.785707  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:05.809484  359214 cri.go:89] found id: ""
	I1213 10:40:05.809497  359214 logs.go:282] 0 containers: []
	W1213 10:40:05.809505  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:05.809510  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:05.809569  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:05.834754  359214 cri.go:89] found id: ""
	I1213 10:40:05.834769  359214 logs.go:282] 0 containers: []
	W1213 10:40:05.834777  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:05.834782  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:05.834844  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:05.858984  359214 cri.go:89] found id: ""
	I1213 10:40:05.858999  359214 logs.go:282] 0 containers: []
	W1213 10:40:05.859006  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:05.859011  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:05.859072  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:05.884414  359214 cri.go:89] found id: ""
	I1213 10:40:05.884429  359214 logs.go:282] 0 containers: []
	W1213 10:40:05.884436  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:05.884442  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:05.884504  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:05.918776  359214 cri.go:89] found id: ""
	I1213 10:40:05.918799  359214 logs.go:282] 0 containers: []
	W1213 10:40:05.918807  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:05.918812  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:05.918880  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:05.963307  359214 cri.go:89] found id: ""
	I1213 10:40:05.963331  359214 logs.go:282] 0 containers: []
	W1213 10:40:05.963340  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:05.963346  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:05.963414  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:05.989236  359214 cri.go:89] found id: ""
	I1213 10:40:05.989252  359214 logs.go:282] 0 containers: []
	W1213 10:40:05.989260  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:05.989274  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:05.989284  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:06.046789  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:06.046809  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:06.063391  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:06.063408  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:06.133569  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:06.125185   14153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.125769   14153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.127395   14153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.128000   14153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.129671   14153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:06.125185   14153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.125769   14153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.127395   14153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.128000   14153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.129671   14153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:06.133579  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:06.133590  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:06.199358  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:06.199385  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
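
The container-status command above is a shell fallback chain: `which crictl || echo crictl` substitutes the bare name when crictl is not on the caller's PATH (letting sudo's own path resolve it), and if the crictl invocation fails for any reason, `sudo docker ps -a` runs instead. A simplified Go sketch of the same preference order (it falls back only when crictl is absent, whereas the shell chain also falls back when crictl runs but fails):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Prefer crictl when it resolves on PATH; otherwise fall back to
        // docker, mirroring the `... || sudo docker ps -a` chain above.
        bin := "crictl"
        if _, err := exec.LookPath(bin); err != nil {
            bin = "docker"
        }
        out, err := exec.Command("sudo", bin, "ps", "-a").CombinedOutput()
        if err != nil {
            fmt.Printf("%s ps -a failed: %v\n", bin, err)
        }
        fmt.Print(string(out))
    }
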
	I1213 10:40:08.731038  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:08.741608  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:08.741668  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:08.770775  359214 cri.go:89] found id: ""
	I1213 10:40:08.770798  359214 logs.go:282] 0 containers: []
	W1213 10:40:08.770806  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:08.770812  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:08.770880  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:08.795812  359214 cri.go:89] found id: ""
	I1213 10:40:08.795826  359214 logs.go:282] 0 containers: []
	W1213 10:40:08.795834  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:08.795839  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:08.795900  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:08.821389  359214 cri.go:89] found id: ""
	I1213 10:40:08.821405  359214 logs.go:282] 0 containers: []
	W1213 10:40:08.821415  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:08.821420  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:08.821484  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:08.847242  359214 cri.go:89] found id: ""
	I1213 10:40:08.847256  359214 logs.go:282] 0 containers: []
	W1213 10:40:08.847265  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:08.847271  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:08.847337  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:08.873913  359214 cri.go:89] found id: ""
	I1213 10:40:08.873927  359214 logs.go:282] 0 containers: []
	W1213 10:40:08.873935  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:08.873940  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:08.874003  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:08.898969  359214 cri.go:89] found id: ""
	I1213 10:40:08.898983  359214 logs.go:282] 0 containers: []
	W1213 10:40:08.898990  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:08.898997  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:08.899063  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:08.936984  359214 cri.go:89] found id: ""
	I1213 10:40:08.936999  359214 logs.go:282] 0 containers: []
	W1213 10:40:08.937006  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:08.937015  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:08.937026  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:09.003459  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:09.003483  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:09.022648  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:09.022673  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:09.089911  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:09.081728   14254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.082500   14254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.083990   14254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.084516   14254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.086022   14254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:09.081728   14254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.082500   14254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.083990   14254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.084516   14254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.086022   14254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:09.089922  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:09.089934  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:09.152235  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:09.152255  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:11.681167  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:11.691399  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:11.691463  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:11.720896  359214 cri.go:89] found id: ""
	I1213 10:40:11.720910  359214 logs.go:282] 0 containers: []
	W1213 10:40:11.720918  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:11.720924  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:11.720987  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:11.746089  359214 cri.go:89] found id: ""
	I1213 10:40:11.746103  359214 logs.go:282] 0 containers: []
	W1213 10:40:11.746111  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:11.746117  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:11.746176  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:11.770642  359214 cri.go:89] found id: ""
	I1213 10:40:11.770657  359214 logs.go:282] 0 containers: []
	W1213 10:40:11.770664  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:11.770670  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:11.770759  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:11.798877  359214 cri.go:89] found id: ""
	I1213 10:40:11.798891  359214 logs.go:282] 0 containers: []
	W1213 10:40:11.798900  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:11.798905  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:11.798965  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:11.824512  359214 cri.go:89] found id: ""
	I1213 10:40:11.824526  359214 logs.go:282] 0 containers: []
	W1213 10:40:11.824534  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:11.824539  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:11.824596  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:11.849644  359214 cri.go:89] found id: ""
	I1213 10:40:11.849658  359214 logs.go:282] 0 containers: []
	W1213 10:40:11.849665  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:11.849671  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:11.849728  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:11.878171  359214 cri.go:89] found id: ""
	I1213 10:40:11.878185  359214 logs.go:282] 0 containers: []
	W1213 10:40:11.878192  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:11.878201  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:11.878213  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:11.942012  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:11.942033  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:11.973830  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:11.973849  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:12.038115  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:12.038135  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:12.055328  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:12.055345  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:12.122312  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:12.113825   14374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:12.114885   14374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:12.116494   14374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:12.116834   14374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:12.118378   14374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:12.113825   14374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:12.114885   14374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:12.116494   14374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:12.116834   14374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:12.118378   14374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
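
The timestamps show a steady cadence: each iteration runs `pgrep -xnf kube-apiserver.*minikube.*`, gathers the same logs when nothing is found, and retries roughly every three seconds (10:40:11 → 10:40:14 → 10:40:17). A minimal sketch of such a wait loop, using hypothetical helper names rather than minikube's internals:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls for a running kube-apiserver process until the
    // deadline passes. pgrep exits non-zero when no process matches, which
    // exec.Command surfaces as an error.
    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                return nil
            }
            time.Sleep(3 * time.Second)
        }
        return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
        if err := waitForAPIServer(30 * time.Second); err != nil {
            fmt.Println(err)
        }
    }
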
	I1213 10:40:14.622545  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:14.632872  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:14.632931  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:14.660285  359214 cri.go:89] found id: ""
	I1213 10:40:14.660300  359214 logs.go:282] 0 containers: []
	W1213 10:40:14.660308  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:14.660313  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:14.660370  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:14.686341  359214 cri.go:89] found id: ""
	I1213 10:40:14.686355  359214 logs.go:282] 0 containers: []
	W1213 10:40:14.686362  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:14.686368  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:14.686427  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:14.710306  359214 cri.go:89] found id: ""
	I1213 10:40:14.710321  359214 logs.go:282] 0 containers: []
	W1213 10:40:14.710328  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:14.710334  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:14.710392  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:14.736823  359214 cri.go:89] found id: ""
	I1213 10:40:14.736838  359214 logs.go:282] 0 containers: []
	W1213 10:40:14.736846  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:14.736851  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:14.736909  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:14.761623  359214 cri.go:89] found id: ""
	I1213 10:40:14.761638  359214 logs.go:282] 0 containers: []
	W1213 10:40:14.761645  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:14.761651  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:14.761710  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:14.786707  359214 cri.go:89] found id: ""
	I1213 10:40:14.786721  359214 logs.go:282] 0 containers: []
	W1213 10:40:14.786729  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:14.786734  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:14.786795  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:14.816346  359214 cri.go:89] found id: ""
	I1213 10:40:14.816361  359214 logs.go:282] 0 containers: []
	W1213 10:40:14.816368  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:14.816376  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:14.816386  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:14.877767  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:14.877786  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:14.914260  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:14.914277  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:14.980282  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:14.980303  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:14.996741  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:14.996760  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:15.099242  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:15.090567   14473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:15.091275   14473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:15.092910   14473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:15.093493   14473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:15.095098   14473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:15.090567   14473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:15.091275   14473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:15.092910   14473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:15.093493   14473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:15.095098   14473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:17.600882  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:17.611377  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:17.611437  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:17.639825  359214 cri.go:89] found id: ""
	I1213 10:40:17.639840  359214 logs.go:282] 0 containers: []
	W1213 10:40:17.639847  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:17.639853  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:17.639912  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:17.664963  359214 cri.go:89] found id: ""
	I1213 10:40:17.664977  359214 logs.go:282] 0 containers: []
	W1213 10:40:17.664985  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:17.664990  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:17.665052  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:17.690137  359214 cri.go:89] found id: ""
	I1213 10:40:17.690152  359214 logs.go:282] 0 containers: []
	W1213 10:40:17.690159  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:17.690165  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:17.690230  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:17.715292  359214 cri.go:89] found id: ""
	I1213 10:40:17.715307  359214 logs.go:282] 0 containers: []
	W1213 10:40:17.715315  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:17.715320  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:17.715382  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:17.744729  359214 cri.go:89] found id: ""
	I1213 10:40:17.744743  359214 logs.go:282] 0 containers: []
	W1213 10:40:17.744750  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:17.744756  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:17.744815  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:17.772253  359214 cri.go:89] found id: ""
	I1213 10:40:17.772268  359214 logs.go:282] 0 containers: []
	W1213 10:40:17.772276  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:17.772282  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:17.772348  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:17.797214  359214 cri.go:89] found id: ""
	I1213 10:40:17.797229  359214 logs.go:282] 0 containers: []
	W1213 10:40:17.797237  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:17.797245  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:17.797255  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:17.852633  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:17.852653  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:17.869612  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:17.869633  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:17.936787  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:17.927568   14559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:17.928465   14559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:17.930186   14559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:17.930475   14559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:17.932615   14559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:17.927568   14559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:17.928465   14559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:17.930186   14559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:17.930475   14559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:17.932615   14559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:17.936804  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:17.936815  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:18.005630  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:18.005656  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
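
Between apiserver probes, the loop collects the same four sources each time: `journalctl -u kubelet -n 400` and `journalctl -u containerd -n 400` (the last 400 lines for each systemd unit), `dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400` (human-readable kernel messages, no pager or color, warning severity and above), and the crictl/docker container listing. Only the `describe nodes` step depends on the apiserver, which is why it is the only one that keeps failing.
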
	I1213 10:40:20.537348  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:20.547703  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:20.547778  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:20.572977  359214 cri.go:89] found id: ""
	I1213 10:40:20.572991  359214 logs.go:282] 0 containers: []
	W1213 10:40:20.572998  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:20.573004  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:20.573062  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:20.602314  359214 cri.go:89] found id: ""
	I1213 10:40:20.602328  359214 logs.go:282] 0 containers: []
	W1213 10:40:20.602335  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:20.602341  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:20.602397  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:20.627655  359214 cri.go:89] found id: ""
	I1213 10:40:20.627669  359214 logs.go:282] 0 containers: []
	W1213 10:40:20.627686  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:20.627698  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:20.627767  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:20.655199  359214 cri.go:89] found id: ""
	I1213 10:40:20.655213  359214 logs.go:282] 0 containers: []
	W1213 10:40:20.655220  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:20.655226  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:20.655291  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:20.682083  359214 cri.go:89] found id: ""
	I1213 10:40:20.682107  359214 logs.go:282] 0 containers: []
	W1213 10:40:20.682115  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:20.682120  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:20.682189  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:20.707128  359214 cri.go:89] found id: ""
	I1213 10:40:20.707142  359214 logs.go:282] 0 containers: []
	W1213 10:40:20.707150  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:20.707155  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:20.707213  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:20.732071  359214 cri.go:89] found id: ""
	I1213 10:40:20.732087  359214 logs.go:282] 0 containers: []
	W1213 10:40:20.732094  359214 logs.go:284] No container was found matching "kindnet"
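	The probes above are one pass of the same two-step check minikube applies to every control-plane component while it waits for the cluster: list matching CRI containers, then warn when none is found. A minimal sketch of that loop, runnable over ssh inside the node, follows; the crictl invocation is copied from the ssh_runner lines above, and only the shell loop around it is added for illustration.

	    # Sketch (assumption: run inside the minikube node, as the ssh_runner commands are).
	    # Re-runs the per-component CRI probe shown above for each expected component.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      [ -z "$ids" ] && echo "no container found matching $name"
	    done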
	I1213 10:40:20.732103  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:20.732112  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:20.797387  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:20.788274   14658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:20.789028   14658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:20.791053   14658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:20.791612   14658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:20.793250   14658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
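	Each retry in this stretch fails for the same underlying reason: nothing is listening on the apiserver port, so kubectl's discovery request to https://localhost:8441/api is refused and minikube simply repeats the log-gathering cycle a few seconds later. The condition can be confirmed by hand with the two commands the loop itself uses (both appear in the ssh_runner lines above, with the pgrep pattern quoted here for the shell; the kubeconfig path is the one the test passes):

	    # Sketch: the two checks behind the repeated "connection refused" retries.
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'   # prints nothing while the apiserver is down
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig    # fails with "connection refused" until it is up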
	I1213 10:40:20.797397  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:20.797410  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:20.859451  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:20.859471  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:20.892801  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:20.892820  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:20.958351  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:20.958371  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:23.480839  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:23.491926  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:23.491987  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:23.518294  359214 cri.go:89] found id: ""
	I1213 10:40:23.518309  359214 logs.go:282] 0 containers: []
	W1213 10:40:23.518317  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:23.518324  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:23.518385  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:23.545487  359214 cri.go:89] found id: ""
	I1213 10:40:23.545502  359214 logs.go:282] 0 containers: []
	W1213 10:40:23.545509  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:23.545514  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:23.545584  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:23.571990  359214 cri.go:89] found id: ""
	I1213 10:40:23.572004  359214 logs.go:282] 0 containers: []
	W1213 10:40:23.572012  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:23.572017  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:23.572080  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:23.599133  359214 cri.go:89] found id: ""
	I1213 10:40:23.599149  359214 logs.go:282] 0 containers: []
	W1213 10:40:23.599157  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:23.599163  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:23.599223  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:23.626203  359214 cri.go:89] found id: ""
	I1213 10:40:23.626217  359214 logs.go:282] 0 containers: []
	W1213 10:40:23.626225  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:23.626232  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:23.626296  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:23.653325  359214 cri.go:89] found id: ""
	I1213 10:40:23.653341  359214 logs.go:282] 0 containers: []
	W1213 10:40:23.653349  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:23.653354  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:23.653423  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:23.688100  359214 cri.go:89] found id: ""
	I1213 10:40:23.688115  359214 logs.go:282] 0 containers: []
	W1213 10:40:23.688123  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:23.688132  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:23.688141  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:23.750798  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:23.750818  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:23.781668  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:23.781685  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:23.839211  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:23.839231  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:23.856390  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:23.856414  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:23.924021  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:23.914017   14781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:23.914911   14781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:23.915850   14781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:23.917610   14781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:23.918368   14781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:40:26.424278  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:26.434304  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:26.434366  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:26.460634  359214 cri.go:89] found id: ""
	I1213 10:40:26.460649  359214 logs.go:282] 0 containers: []
	W1213 10:40:26.460657  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:26.460663  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:26.460723  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:26.485153  359214 cri.go:89] found id: ""
	I1213 10:40:26.485167  359214 logs.go:282] 0 containers: []
	W1213 10:40:26.485175  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:26.485180  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:26.485238  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:26.514602  359214 cri.go:89] found id: ""
	I1213 10:40:26.514617  359214 logs.go:282] 0 containers: []
	W1213 10:40:26.514624  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:26.514630  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:26.514715  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:26.539399  359214 cri.go:89] found id: ""
	I1213 10:40:26.539415  359214 logs.go:282] 0 containers: []
	W1213 10:40:26.539422  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:26.539427  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:26.539489  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:26.564066  359214 cri.go:89] found id: ""
	I1213 10:40:26.564081  359214 logs.go:282] 0 containers: []
	W1213 10:40:26.564088  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:26.564094  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:26.564158  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:26.595722  359214 cri.go:89] found id: ""
	I1213 10:40:26.595736  359214 logs.go:282] 0 containers: []
	W1213 10:40:26.595744  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:26.595749  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:26.595808  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:26.621852  359214 cri.go:89] found id: ""
	I1213 10:40:26.621867  359214 logs.go:282] 0 containers: []
	W1213 10:40:26.621875  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:26.621884  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:26.621894  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:26.678226  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:26.678245  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:26.694679  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:26.694762  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:26.760593  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:26.751702   14872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:26.752418   14872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:26.754240   14872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:26.754904   14872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:26.756624   14872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:40:26.760604  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:26.760615  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:26.826139  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:26.826161  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:29.354247  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:29.364778  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:29.364838  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:29.391976  359214 cri.go:89] found id: ""
	I1213 10:40:29.391992  359214 logs.go:282] 0 containers: []
	W1213 10:40:29.391999  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:29.392006  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:29.392065  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:29.420898  359214 cri.go:89] found id: ""
	I1213 10:40:29.420913  359214 logs.go:282] 0 containers: []
	W1213 10:40:29.420920  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:29.420926  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:29.420995  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:29.445579  359214 cri.go:89] found id: ""
	I1213 10:40:29.445593  359214 logs.go:282] 0 containers: []
	W1213 10:40:29.445601  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:29.445606  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:29.445669  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:29.470481  359214 cri.go:89] found id: ""
	I1213 10:40:29.470496  359214 logs.go:282] 0 containers: []
	W1213 10:40:29.470504  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:29.470510  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:29.470571  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:29.494582  359214 cri.go:89] found id: ""
	I1213 10:40:29.494597  359214 logs.go:282] 0 containers: []
	W1213 10:40:29.494605  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:29.494612  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:29.494672  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:29.520784  359214 cri.go:89] found id: ""
	I1213 10:40:29.520801  359214 logs.go:282] 0 containers: []
	W1213 10:40:29.520810  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:29.520816  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:29.520879  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:29.546369  359214 cri.go:89] found id: ""
	I1213 10:40:29.546383  359214 logs.go:282] 0 containers: []
	W1213 10:40:29.546390  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:29.546398  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:29.546410  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:29.607363  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:29.607383  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:29.641550  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:29.641568  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:29.700639  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:29.700662  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:29.717135  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:29.717152  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:29.786035  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:29.777828   14992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:29.778659   14992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:29.780297   14992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:29.780629   14992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:29.782173   14992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:40:32.286874  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:32.297433  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:32.297493  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:32.326086  359214 cri.go:89] found id: ""
	I1213 10:40:32.326102  359214 logs.go:282] 0 containers: []
	W1213 10:40:32.326109  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:32.326116  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:32.326172  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:32.359076  359214 cri.go:89] found id: ""
	I1213 10:40:32.359091  359214 logs.go:282] 0 containers: []
	W1213 10:40:32.359098  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:32.359104  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:32.359170  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:32.384522  359214 cri.go:89] found id: ""
	I1213 10:40:32.384536  359214 logs.go:282] 0 containers: []
	W1213 10:40:32.384544  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:32.384560  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:32.384659  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:32.410250  359214 cri.go:89] found id: ""
	I1213 10:40:32.410264  359214 logs.go:282] 0 containers: []
	W1213 10:40:32.410272  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:32.410285  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:32.410348  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:32.435630  359214 cri.go:89] found id: ""
	I1213 10:40:32.435644  359214 logs.go:282] 0 containers: []
	W1213 10:40:32.435651  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:32.435656  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:32.435714  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:32.463149  359214 cri.go:89] found id: ""
	I1213 10:40:32.463163  359214 logs.go:282] 0 containers: []
	W1213 10:40:32.463171  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:32.463176  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:32.463242  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:32.487678  359214 cri.go:89] found id: ""
	I1213 10:40:32.487692  359214 logs.go:282] 0 containers: []
	W1213 10:40:32.487700  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:32.487707  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:32.487716  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:32.550022  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:32.550044  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:32.583548  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:32.583564  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:32.640719  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:32.640741  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:32.658578  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:32.658596  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:32.723797  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:32.714586   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:32.715311   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:32.716834   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:32.717289   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:32.719662   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:40:35.224914  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:35.236872  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:35.237012  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:35.268051  359214 cri.go:89] found id: ""
	I1213 10:40:35.268066  359214 logs.go:282] 0 containers: []
	W1213 10:40:35.268073  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:35.268080  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:35.268145  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:35.295044  359214 cri.go:89] found id: ""
	I1213 10:40:35.295059  359214 logs.go:282] 0 containers: []
	W1213 10:40:35.295068  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:35.295075  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:35.295135  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:35.325621  359214 cri.go:89] found id: ""
	I1213 10:40:35.325634  359214 logs.go:282] 0 containers: []
	W1213 10:40:35.325642  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:35.325647  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:35.325710  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:35.351145  359214 cri.go:89] found id: ""
	I1213 10:40:35.351160  359214 logs.go:282] 0 containers: []
	W1213 10:40:35.351168  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:35.351173  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:35.351232  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:35.376062  359214 cri.go:89] found id: ""
	I1213 10:40:35.376076  359214 logs.go:282] 0 containers: []
	W1213 10:40:35.376083  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:35.376089  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:35.376145  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:35.400598  359214 cri.go:89] found id: ""
	I1213 10:40:35.400612  359214 logs.go:282] 0 containers: []
	W1213 10:40:35.400619  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:35.400631  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:35.400688  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:35.425347  359214 cri.go:89] found id: ""
	I1213 10:40:35.425361  359214 logs.go:282] 0 containers: []
	W1213 10:40:35.425368  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:35.425376  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:35.425387  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:35.487139  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:35.487160  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:35.514527  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:35.514544  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:35.571469  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:35.571489  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:35.590017  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:35.590034  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:35.658284  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:35.648682   15200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:35.650020   15200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:35.650936   15200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:35.652639   15200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:35.653357   15200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:40:38.158809  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:38.173580  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:38.173664  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:38.205099  359214 cri.go:89] found id: ""
	I1213 10:40:38.205115  359214 logs.go:282] 0 containers: []
	W1213 10:40:38.205122  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:38.205128  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:38.205185  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:38.230418  359214 cri.go:89] found id: ""
	I1213 10:40:38.230432  359214 logs.go:282] 0 containers: []
	W1213 10:40:38.230439  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:38.230445  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:38.230503  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:38.255657  359214 cri.go:89] found id: ""
	I1213 10:40:38.255671  359214 logs.go:282] 0 containers: []
	W1213 10:40:38.255679  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:38.255684  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:38.255743  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:38.284257  359214 cri.go:89] found id: ""
	I1213 10:40:38.284271  359214 logs.go:282] 0 containers: []
	W1213 10:40:38.284279  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:38.284285  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:38.284343  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:38.310187  359214 cri.go:89] found id: ""
	I1213 10:40:38.310202  359214 logs.go:282] 0 containers: []
	W1213 10:40:38.310209  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:38.310214  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:38.310272  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:38.334855  359214 cri.go:89] found id: ""
	I1213 10:40:38.334870  359214 logs.go:282] 0 containers: []
	W1213 10:40:38.334878  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:38.334883  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:38.334943  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:38.364073  359214 cri.go:89] found id: ""
	I1213 10:40:38.364087  359214 logs.go:282] 0 containers: []
	W1213 10:40:38.364095  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:38.364103  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:38.364114  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:38.380615  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:38.380633  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:38.445151  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:38.436629   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:38.437359   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:38.439007   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:38.439526   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:38.441205   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:40:38.445161  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:38.445171  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:38.508000  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:38.508024  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:38.536010  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:38.536028  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:41.097145  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:41.107492  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:41.107560  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:41.133151  359214 cri.go:89] found id: ""
	I1213 10:40:41.133165  359214 logs.go:282] 0 containers: []
	W1213 10:40:41.133173  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:41.133178  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:41.133239  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:41.158807  359214 cri.go:89] found id: ""
	I1213 10:40:41.158822  359214 logs.go:282] 0 containers: []
	W1213 10:40:41.158830  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:41.158835  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:41.158900  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:41.186344  359214 cri.go:89] found id: ""
	I1213 10:40:41.186358  359214 logs.go:282] 0 containers: []
	W1213 10:40:41.186366  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:41.186371  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:41.186432  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:41.212889  359214 cri.go:89] found id: ""
	I1213 10:40:41.212904  359214 logs.go:282] 0 containers: []
	W1213 10:40:41.212911  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:41.212917  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:41.212976  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:41.238414  359214 cri.go:89] found id: ""
	I1213 10:40:41.238429  359214 logs.go:282] 0 containers: []
	W1213 10:40:41.238437  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:41.238442  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:41.238509  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:41.265200  359214 cri.go:89] found id: ""
	I1213 10:40:41.265215  359214 logs.go:282] 0 containers: []
	W1213 10:40:41.265222  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:41.265228  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:41.265299  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:41.293447  359214 cri.go:89] found id: ""
	I1213 10:40:41.293465  359214 logs.go:282] 0 containers: []
	W1213 10:40:41.293473  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:41.293483  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:41.293539  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:41.357277  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:41.348095   15390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:41.348933   15390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:41.350453   15390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:41.350904   15390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:41.352722   15390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:40:41.357289  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:41.357299  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:41.419746  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:41.419767  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:41.447382  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:41.447400  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:41.502410  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:41.502430  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:44.019462  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:44.030131  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:44.030195  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:44.063076  359214 cri.go:89] found id: ""
	I1213 10:40:44.063093  359214 logs.go:282] 0 containers: []
	W1213 10:40:44.063102  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:44.063107  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:44.063171  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:44.087990  359214 cri.go:89] found id: ""
	I1213 10:40:44.088005  359214 logs.go:282] 0 containers: []
	W1213 10:40:44.088012  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:44.088017  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:44.088077  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:44.116967  359214 cri.go:89] found id: ""
	I1213 10:40:44.116982  359214 logs.go:282] 0 containers: []
	W1213 10:40:44.117000  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:44.117006  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:44.117075  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:44.144381  359214 cri.go:89] found id: ""
	I1213 10:40:44.144395  359214 logs.go:282] 0 containers: []
	W1213 10:40:44.144403  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:44.144414  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:44.144475  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:44.176265  359214 cri.go:89] found id: ""
	I1213 10:40:44.176279  359214 logs.go:282] 0 containers: []
	W1213 10:40:44.176286  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:44.176291  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:44.176349  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:44.204075  359214 cri.go:89] found id: ""
	I1213 10:40:44.204090  359214 logs.go:282] 0 containers: []
	W1213 10:40:44.204097  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:44.204102  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:44.204159  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:44.235147  359214 cri.go:89] found id: ""
	I1213 10:40:44.235161  359214 logs.go:282] 0 containers: []
	W1213 10:40:44.235169  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:44.235177  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:44.235187  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:44.290923  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:44.290942  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:44.307381  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:44.307398  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:44.371069  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:44.362628   15505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:44.363314   15505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:44.365045   15505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:44.365643   15505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:44.367260   15505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:44.362628   15505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:44.363314   15505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:44.365045   15505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:44.365643   15505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:44.367260   15505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:44.371080  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:44.371092  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:44.432736  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:44.432757  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
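[Editor's note] Each retry cycle opens with the same sweep: a pgrep for a live kube-apiserver process, then one crictl query per control-plane component. Every query in this run returns an empty ID list (found id: ""), which is what produces the repeated "No container was found matching …" warnings. A hedged sketch of that sweep as a single loop run inside the node (the loop itself is illustrative; the individual commands are the ones logged above):

    # Process-level check that opens each cycle:
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

    # Per-component container check; --quiet prints bare container IDs,
    # so empty output means the component was never created.
    for c in kube-apiserver etcd coredns kube-scheduler \
             kube-proxy kube-controller-manager kindnet; do
      echo "== $c =="
      sudo crictl ps -a --quiet --name="$c"
    done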
	I1213 10:40:46.966048  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:46.976554  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:46.976616  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:47.009823  359214 cri.go:89] found id: ""
	I1213 10:40:47.009837  359214 logs.go:282] 0 containers: []
	W1213 10:40:47.009845  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:47.009850  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:47.009912  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:47.035213  359214 cri.go:89] found id: ""
	I1213 10:40:47.035227  359214 logs.go:282] 0 containers: []
	W1213 10:40:47.035234  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:47.035239  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:47.035300  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:47.060442  359214 cri.go:89] found id: ""
	I1213 10:40:47.060457  359214 logs.go:282] 0 containers: []
	W1213 10:40:47.060465  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:47.060470  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:47.060527  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:47.084361  359214 cri.go:89] found id: ""
	I1213 10:40:47.084375  359214 logs.go:282] 0 containers: []
	W1213 10:40:47.084383  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:47.084389  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:47.084453  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:47.109828  359214 cri.go:89] found id: ""
	I1213 10:40:47.109843  359214 logs.go:282] 0 containers: []
	W1213 10:40:47.109850  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:47.109856  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:47.109920  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:47.138538  359214 cri.go:89] found id: ""
	I1213 10:40:47.138553  359214 logs.go:282] 0 containers: []
	W1213 10:40:47.138561  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:47.138566  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:47.138623  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:47.173086  359214 cri.go:89] found id: ""
	I1213 10:40:47.173101  359214 logs.go:282] 0 containers: []
	W1213 10:40:47.173108  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:47.173116  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:47.173125  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:47.230267  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:47.230285  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:47.247567  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:47.247584  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:47.313118  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:47.305055   15610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:47.305868   15610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:47.307513   15610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:47.307952   15610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:47.309445   15610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:47.305055   15610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:47.305868   15610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:47.307513   15610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:47.307952   15610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:47.309445   15610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:47.313128  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:47.313140  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:47.379486  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:47.379507  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:49.911610  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:49.921678  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:49.921738  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:49.945802  359214 cri.go:89] found id: ""
	I1213 10:40:49.945815  359214 logs.go:282] 0 containers: []
	W1213 10:40:49.945823  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:49.945828  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:49.945884  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:49.972021  359214 cri.go:89] found id: ""
	I1213 10:40:49.972036  359214 logs.go:282] 0 containers: []
	W1213 10:40:49.972043  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:49.972048  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:49.972104  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:49.995832  359214 cri.go:89] found id: ""
	I1213 10:40:49.995847  359214 logs.go:282] 0 containers: []
	W1213 10:40:49.995854  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:49.995859  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:49.995917  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:50.025400  359214 cri.go:89] found id: ""
	I1213 10:40:50.025416  359214 logs.go:282] 0 containers: []
	W1213 10:40:50.025424  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:50.025430  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:50.025488  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:50.052197  359214 cri.go:89] found id: ""
	I1213 10:40:50.052213  359214 logs.go:282] 0 containers: []
	W1213 10:40:50.052222  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:50.052229  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:50.052290  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:50.079760  359214 cri.go:89] found id: ""
	I1213 10:40:50.079774  359214 logs.go:282] 0 containers: []
	W1213 10:40:50.079782  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:50.079788  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:50.079849  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:50.109349  359214 cri.go:89] found id: ""
	I1213 10:40:50.109364  359214 logs.go:282] 0 containers: []
	W1213 10:40:50.109372  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:50.109380  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:50.109390  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:50.165908  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:50.165929  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:50.184199  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:50.184216  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:50.252767  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:50.244722   15716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:50.245526   15716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:50.247105   15716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:50.247464   15716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:50.248997   15716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:50.244722   15716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:50.245526   15716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:50.247105   15716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:50.247464   15716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:50.248997   15716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
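[Editor's note] Every "describe nodes" attempt above fails identically: kubectl inside the node dials the server address recorded in /var/lib/minikube/kubeconfig (localhost:8441) and is refused, which is consistent with the sweep showing that no kube-apiserver container was ever created. A minimal sketch for confirming this by hand, assuming the node is still running (the ss check, and its availability in the node image, are assumptions; the kubectl line is the exact command the log wraps):

    # Re-run the wrapped command directly inside the node:
    minikube -p functional-652709 ssh -- \
      sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig

    # Assumed check: verify nothing is listening on the apiserver port.
    minikube -p functional-652709 ssh -- sudo ss -ltn 'sport = :8441'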
	I1213 10:40:50.252777  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:50.252790  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:50.314222  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:50.314241  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:52.842532  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:52.853108  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:52.853184  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:52.880391  359214 cri.go:89] found id: ""
	I1213 10:40:52.880412  359214 logs.go:282] 0 containers: []
	W1213 10:40:52.880420  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:52.880426  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:52.880487  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:52.905175  359214 cri.go:89] found id: ""
	I1213 10:40:52.905189  359214 logs.go:282] 0 containers: []
	W1213 10:40:52.905197  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:52.905202  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:52.905279  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:52.934872  359214 cri.go:89] found id: ""
	I1213 10:40:52.934887  359214 logs.go:282] 0 containers: []
	W1213 10:40:52.934894  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:52.934900  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:52.934956  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:52.960307  359214 cri.go:89] found id: ""
	I1213 10:40:52.960321  359214 logs.go:282] 0 containers: []
	W1213 10:40:52.960329  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:52.960334  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:52.960390  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:52.985363  359214 cri.go:89] found id: ""
	I1213 10:40:52.985377  359214 logs.go:282] 0 containers: []
	W1213 10:40:52.985385  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:52.985390  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:52.985453  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:53.011565  359214 cri.go:89] found id: ""
	I1213 10:40:53.011581  359214 logs.go:282] 0 containers: []
	W1213 10:40:53.011589  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:53.011594  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:53.011657  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:53.036397  359214 cri.go:89] found id: ""
	I1213 10:40:53.036412  359214 logs.go:282] 0 containers: []
	W1213 10:40:53.036420  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:53.036428  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:53.036438  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:53.091583  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:53.091603  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:53.107990  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:53.108007  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:53.173876  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:53.164848   15816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:53.165601   15816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:53.167336   15816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:53.167976   15816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:53.169634   15816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:53.164848   15816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:53.165601   15816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:53.167336   15816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:53.167976   15816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:53.169634   15816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:53.173886  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:53.173897  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:53.238989  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:53.239009  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:55.773075  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:55.783512  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:55.783574  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:55.807988  359214 cri.go:89] found id: ""
	I1213 10:40:55.808002  359214 logs.go:282] 0 containers: []
	W1213 10:40:55.808009  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:55.808014  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:55.808073  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:55.831609  359214 cri.go:89] found id: ""
	I1213 10:40:55.831624  359214 logs.go:282] 0 containers: []
	W1213 10:40:55.831632  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:55.831637  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:55.831696  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:55.856162  359214 cri.go:89] found id: ""
	I1213 10:40:55.856177  359214 logs.go:282] 0 containers: []
	W1213 10:40:55.856184  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:55.856190  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:55.856247  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:55.883604  359214 cri.go:89] found id: ""
	I1213 10:40:55.883619  359214 logs.go:282] 0 containers: []
	W1213 10:40:55.883626  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:55.883631  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:55.883695  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:55.907679  359214 cri.go:89] found id: ""
	I1213 10:40:55.907694  359214 logs.go:282] 0 containers: []
	W1213 10:40:55.907701  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:55.907706  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:55.907764  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:55.932970  359214 cri.go:89] found id: ""
	I1213 10:40:55.932984  359214 logs.go:282] 0 containers: []
	W1213 10:40:55.932991  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:55.932996  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:55.933057  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:55.956837  359214 cri.go:89] found id: ""
	I1213 10:40:55.956851  359214 logs.go:282] 0 containers: []
	W1213 10:40:55.956858  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:55.956866  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:55.956877  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:56.030354  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:56.021163   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:56.021989   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:56.023979   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:56.024615   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:56.026271   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:56.021163   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:56.021989   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:56.023979   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:56.024615   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:56.026271   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:56.030364  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:56.030376  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:56.092205  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:56.092226  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:56.119616  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:56.119633  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:56.177084  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:56.177103  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:58.695794  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:58.706025  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:58.706086  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:58.729634  359214 cri.go:89] found id: ""
	I1213 10:40:58.729647  359214 logs.go:282] 0 containers: []
	W1213 10:40:58.729654  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:58.729659  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:58.729718  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:58.753786  359214 cri.go:89] found id: ""
	I1213 10:40:58.753800  359214 logs.go:282] 0 containers: []
	W1213 10:40:58.753808  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:58.753813  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:58.753874  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:58.778478  359214 cri.go:89] found id: ""
	I1213 10:40:58.778491  359214 logs.go:282] 0 containers: []
	W1213 10:40:58.778498  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:58.778503  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:58.778560  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:58.803243  359214 cri.go:89] found id: ""
	I1213 10:40:58.803258  359214 logs.go:282] 0 containers: []
	W1213 10:40:58.803274  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:58.803280  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:58.803342  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:58.827435  359214 cri.go:89] found id: ""
	I1213 10:40:58.827449  359214 logs.go:282] 0 containers: []
	W1213 10:40:58.827457  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:58.827462  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:58.827526  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:58.852612  359214 cri.go:89] found id: ""
	I1213 10:40:58.852627  359214 logs.go:282] 0 containers: []
	W1213 10:40:58.852635  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:58.852640  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:58.852702  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:58.879181  359214 cri.go:89] found id: ""
	I1213 10:40:58.879195  359214 logs.go:282] 0 containers: []
	W1213 10:40:58.879202  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:58.879210  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:58.879224  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:58.940146  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:58.940166  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:58.969086  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:58.969104  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:59.027812  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:59.027832  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:59.044161  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:59.044180  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:59.107958  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:59.099940   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:59.100731   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:59.102281   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:59.102588   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:59.104070   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:59.099940   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:59.100731   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:59.102281   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:59.102588   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:59.104070   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:41:01.608222  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:01.619072  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:01.619137  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:01.644559  359214 cri.go:89] found id: ""
	I1213 10:41:01.644574  359214 logs.go:282] 0 containers: []
	W1213 10:41:01.644582  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:01.644587  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:01.644690  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:01.673686  359214 cri.go:89] found id: ""
	I1213 10:41:01.673701  359214 logs.go:282] 0 containers: []
	W1213 10:41:01.673709  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:01.673714  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:01.673776  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:01.700231  359214 cri.go:89] found id: ""
	I1213 10:41:01.700246  359214 logs.go:282] 0 containers: []
	W1213 10:41:01.700253  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:01.700259  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:01.700317  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:01.729867  359214 cri.go:89] found id: ""
	I1213 10:41:01.729883  359214 logs.go:282] 0 containers: []
	W1213 10:41:01.729890  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:01.729895  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:01.729954  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:01.754275  359214 cri.go:89] found id: ""
	I1213 10:41:01.754289  359214 logs.go:282] 0 containers: []
	W1213 10:41:01.754297  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:01.754302  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:01.754362  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:01.780449  359214 cri.go:89] found id: ""
	I1213 10:41:01.780464  359214 logs.go:282] 0 containers: []
	W1213 10:41:01.780472  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:01.780477  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:01.780533  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:01.806614  359214 cri.go:89] found id: ""
	I1213 10:41:01.806638  359214 logs.go:282] 0 containers: []
	W1213 10:41:01.806646  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:01.806654  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:01.806666  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:01.872660  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:01.872681  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:41:01.908081  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:01.908099  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:01.965082  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:01.965103  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:01.982015  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:01.982033  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:02.054794  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:02.045518   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:02.046349   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:02.047002   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:02.048599   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:02.049133   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:41:02.045518   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:02.046349   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:02.047002   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:02.048599   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:02.049133   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:41:04.555147  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:04.565791  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:04.565856  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:04.591956  359214 cri.go:89] found id: ""
	I1213 10:41:04.591971  359214 logs.go:282] 0 containers: []
	W1213 10:41:04.591978  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:04.591984  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:04.592045  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:04.615698  359214 cri.go:89] found id: ""
	I1213 10:41:04.615713  359214 logs.go:282] 0 containers: []
	W1213 10:41:04.615720  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:04.615725  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:04.615786  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:04.640509  359214 cri.go:89] found id: ""
	I1213 10:41:04.640523  359214 logs.go:282] 0 containers: []
	W1213 10:41:04.640531  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:04.640538  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:04.640596  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:04.665547  359214 cri.go:89] found id: ""
	I1213 10:41:04.665562  359214 logs.go:282] 0 containers: []
	W1213 10:41:04.665569  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:04.665577  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:04.665637  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:04.690947  359214 cri.go:89] found id: ""
	I1213 10:41:04.690961  359214 logs.go:282] 0 containers: []
	W1213 10:41:04.690969  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:04.690974  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:04.691037  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:04.720397  359214 cri.go:89] found id: ""
	I1213 10:41:04.720421  359214 logs.go:282] 0 containers: []
	W1213 10:41:04.720429  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:04.720435  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:04.720492  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:04.750207  359214 cri.go:89] found id: ""
	I1213 10:41:04.750233  359214 logs.go:282] 0 containers: []
	W1213 10:41:04.750241  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:04.750250  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:04.750261  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:04.814350  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:04.806033   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:04.806630   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:04.808181   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:04.808726   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:04.810316   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:41:04.806033   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:04.806630   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:04.808181   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:04.808726   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:04.810316   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:41:04.814360  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:04.814381  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:04.876775  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:04.876798  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:41:04.904820  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:04.904836  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:04.962939  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:04.962958  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
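[Editor's note] The pgrep timestamps (10:40:44.0, 10:40:46.9, 10:40:49.9, 10:40:52.8, …) show the whole probe-and-gather cycle repeating on a roughly three-second cadence, presumably until a kube-apiserver process appears or minikube gives up. An illustrative wait loop with the same shape, handy for watching a node by hand (the interval and attempt count are assumptions, not minikube's own values):

    # Poll for a kube-apiserver process every 3 seconds, ~6 minutes total.
    for i in $(seq 1 120); do
      if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
        echo "kube-apiserver is up"; break
      fi
      sleep 3
    done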
	I1213 10:41:07.479750  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:07.489681  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:07.489740  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:07.516670  359214 cri.go:89] found id: ""
	I1213 10:41:07.516684  359214 logs.go:282] 0 containers: []
	W1213 10:41:07.516691  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:07.516697  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:07.516754  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:07.541873  359214 cri.go:89] found id: ""
	I1213 10:41:07.541888  359214 logs.go:282] 0 containers: []
	W1213 10:41:07.541895  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:07.541900  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:07.541958  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:07.567390  359214 cri.go:89] found id: ""
	I1213 10:41:07.567404  359214 logs.go:282] 0 containers: []
	W1213 10:41:07.567411  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:07.567416  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:07.567476  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:07.595533  359214 cri.go:89] found id: ""
	I1213 10:41:07.595546  359214 logs.go:282] 0 containers: []
	W1213 10:41:07.595553  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:07.595559  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:07.595624  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:07.619449  359214 cri.go:89] found id: ""
	I1213 10:41:07.619463  359214 logs.go:282] 0 containers: []
	W1213 10:41:07.619470  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:07.619476  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:07.619535  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:07.646270  359214 cri.go:89] found id: ""
	I1213 10:41:07.646284  359214 logs.go:282] 0 containers: []
	W1213 10:41:07.646291  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:07.646297  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:07.646356  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:07.671609  359214 cri.go:89] found id: ""
	I1213 10:41:07.671623  359214 logs.go:282] 0 containers: []
	W1213 10:41:07.671630  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:07.671638  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:07.671648  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:07.726992  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:07.727010  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:07.743360  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:07.743377  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:07.805371  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:07.797570   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:07.797988   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:07.799538   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:07.799877   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:07.801379   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:41:07.797570   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:07.797988   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:07.799538   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:07.799877   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:07.801379   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:41:07.805381  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:07.805393  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:07.867093  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:07.867115  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
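The found id: "" / "0 containers" pairs come straight from parsing "crictl ps -a --quiet --name=<component>", which prints one container ID per line and nothing at all when no container matches. A sketch of that parse; idsFromCrictl is a hypothetical helper for illustration, not minikube's actual function:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // idsFromCrictl splits "crictl ps -a --quiet" output into container IDs.
    // Empty output yields a nil slice, which the log reports as
    // "0 containers: []". (Hypothetical helper for illustration.)
    func idsFromCrictl(out string) []string {
    	var ids []string
    	for _, line := range strings.Split(out, "\n") {
    		if id := strings.TrimSpace(line); id != "" {
    			ids = append(ids, id)
    		}
    	}
    	return ids
    }

    func main() {
    	fmt.Println(len(idsFromCrictl("")))           // 0: no kube-apiserver container
    	fmt.Println(len(idsFromCrictl("abc123\n\n"))) // 1
    }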
	I1213 10:41:10.399083  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:10.409097  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:10.409158  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:10.444135  359214 cri.go:89] found id: ""
	I1213 10:41:10.444149  359214 logs.go:282] 0 containers: []
	W1213 10:41:10.444157  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:10.444162  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:10.444224  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:10.476756  359214 cri.go:89] found id: ""
	I1213 10:41:10.476771  359214 logs.go:282] 0 containers: []
	W1213 10:41:10.476778  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:10.476784  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:10.476842  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:10.501876  359214 cri.go:89] found id: ""
	I1213 10:41:10.501890  359214 logs.go:282] 0 containers: []
	W1213 10:41:10.501898  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:10.501903  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:10.501962  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:10.526921  359214 cri.go:89] found id: ""
	I1213 10:41:10.526936  359214 logs.go:282] 0 containers: []
	W1213 10:41:10.526943  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:10.526949  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:10.527008  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:10.560474  359214 cri.go:89] found id: ""
	I1213 10:41:10.560489  359214 logs.go:282] 0 containers: []
	W1213 10:41:10.560496  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:10.560501  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:10.560560  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:10.589176  359214 cri.go:89] found id: ""
	I1213 10:41:10.589190  359214 logs.go:282] 0 containers: []
	W1213 10:41:10.589209  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:10.589215  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:10.589301  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:10.614119  359214 cri.go:89] found id: ""
	I1213 10:41:10.614139  359214 logs.go:282] 0 containers: []
	W1213 10:41:10.614146  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:10.614155  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:10.614165  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:10.669835  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:10.669856  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:10.687547  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:10.687564  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:10.753151  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:10.744373   16446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:10.744993   16446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:10.747393   16446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:10.747860   16446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:10.749416   16446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:41:10.744373   16446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:10.744993   16446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:10.747393   16446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:10.747860   16446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:10.749416   16446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:41:10.753161  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:10.753175  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:10.825142  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:10.825173  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
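The timestamps show the same probe repeating on a roughly three-second cadence: "pgrep -xnf kube-apiserver.*minikube.*" exits non-zero while no apiserver process exists, a full diagnostic pass runs, and the wait continues. A minimal sketch of that loop shape, assuming a fixed interval and an overall deadline (minikube's real retry budget and helper differ):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute) // assumed budget for this sketch
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 only when a matching process exists.
    		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
    		if err == nil {
    			fmt.Println("kube-apiserver process is up")
    			return
    		}
    		// No process yet: this is where the log-gathering pass above runs.
    		time.Sleep(3 * time.Second)
    	}
    	fmt.Println("timed out waiting for kube-apiserver")
    }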
	I1213 10:41:13.352978  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:13.363579  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:13.363649  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:13.392544  359214 cri.go:89] found id: ""
	I1213 10:41:13.392558  359214 logs.go:282] 0 containers: []
	W1213 10:41:13.392565  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:13.392571  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:13.392668  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:13.431393  359214 cri.go:89] found id: ""
	I1213 10:41:13.431407  359214 logs.go:282] 0 containers: []
	W1213 10:41:13.431424  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:13.431430  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:13.431498  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:13.467012  359214 cri.go:89] found id: ""
	I1213 10:41:13.467027  359214 logs.go:282] 0 containers: []
	W1213 10:41:13.467034  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:13.467040  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:13.467114  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:13.495958  359214 cri.go:89] found id: ""
	I1213 10:41:13.495972  359214 logs.go:282] 0 containers: []
	W1213 10:41:13.495990  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:13.495996  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:13.496061  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:13.521376  359214 cri.go:89] found id: ""
	I1213 10:41:13.521399  359214 logs.go:282] 0 containers: []
	W1213 10:41:13.521408  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:13.521413  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:13.521480  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:13.548831  359214 cri.go:89] found id: ""
	I1213 10:41:13.548845  359214 logs.go:282] 0 containers: []
	W1213 10:41:13.548852  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:13.548858  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:13.548920  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:13.574611  359214 cri.go:89] found id: ""
	I1213 10:41:13.574626  359214 logs.go:282] 0 containers: []
	W1213 10:41:13.574633  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:13.574661  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:13.574673  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:13.631156  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:13.631175  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:13.647668  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:13.647685  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:13.712729  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:13.703922   16550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:13.704556   16550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:13.706153   16550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:13.706621   16550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:13.708279   16550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:41:13.703922   16550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:13.704556   16550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:13.706153   16550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:13.706621   16550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:13.708279   16550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:41:13.712740  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:13.712752  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:13.776779  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:13.776799  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
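The "listing CRI containers ... {State:all Name:etcd Namespaces:[]}" lines are consistent with a Go options struct printed with %+v before being translated into crictl flags (-a for State:all, --name for Name). A sketch that reproduces the notation; the struct definition is an assumption for illustration, not minikube's exact type:

    package main

    import "fmt"

    // ListOptions mirrors the fields visible in the log line; the real
    // type in minikube's cri package may differ (assumption).
    type ListOptions struct {
    	State      string
    	Name       string
    	Namespaces []string
    }

    func main() {
    	opts := ListOptions{State: "all", Name: "kube-apiserver"}
    	// %+v prints field names, matching the shape in the log:
    	// {State:all Name:kube-apiserver Namespaces:[]}
    	fmt.Printf("listing CRI containers: %+v\n", opts)
    }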
	I1213 10:41:16.310332  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:16.320699  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:16.320761  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:16.344441  359214 cri.go:89] found id: ""
	I1213 10:41:16.344455  359214 logs.go:282] 0 containers: []
	W1213 10:41:16.344462  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:16.344468  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:16.344529  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:16.372703  359214 cri.go:89] found id: ""
	I1213 10:41:16.372717  359214 logs.go:282] 0 containers: []
	W1213 10:41:16.372725  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:16.372730  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:16.372789  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:16.397701  359214 cri.go:89] found id: ""
	I1213 10:41:16.397715  359214 logs.go:282] 0 containers: []
	W1213 10:41:16.397723  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:16.397728  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:16.397785  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:16.436711  359214 cri.go:89] found id: ""
	I1213 10:41:16.436726  359214 logs.go:282] 0 containers: []
	W1213 10:41:16.436733  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:16.436739  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:16.436795  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:16.471220  359214 cri.go:89] found id: ""
	I1213 10:41:16.471235  359214 logs.go:282] 0 containers: []
	W1213 10:41:16.471243  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:16.471248  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:16.471306  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:16.498773  359214 cri.go:89] found id: ""
	I1213 10:41:16.498788  359214 logs.go:282] 0 containers: []
	W1213 10:41:16.498796  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:16.498801  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:16.498861  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:16.523734  359214 cri.go:89] found id: ""
	I1213 10:41:16.523749  359214 logs.go:282] 0 containers: []
	W1213 10:41:16.523756  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:16.523764  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:16.523775  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:41:16.554346  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:16.554364  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:16.610645  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:16.610665  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:16.626953  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:16.626970  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:16.691344  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:16.682639   16666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:16.683311   16666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:16.685086   16666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:16.685793   16666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:16.687420   16666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:41:16.682639   16666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:16.683311   16666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:16.685086   16666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:16.685793   16666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:16.687420   16666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:41:16.691354  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:16.691367  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
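Every kubectl attempt in this stretch dies the same way: the dial to localhost:8441 is refused before any TLS or auth happens, meaning the host answered but no process is bound to the apiserver port, which matches the empty kube-apiserver listings above. A quick probe that distinguishes "refused" from "listening", as a sketch:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// "connection refused" here means the host is reachable but nothing
    	// is bound to 8441, consistent with no apiserver container existing.
    	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
    	if err != nil {
    		fmt.Println("dial failed:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on 8441")
    }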
	I1213 10:41:19.255129  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:19.265879  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:19.265940  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:19.291837  359214 cri.go:89] found id: ""
	I1213 10:41:19.291851  359214 logs.go:282] 0 containers: []
	W1213 10:41:19.291859  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:19.291864  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:19.291923  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:19.315964  359214 cri.go:89] found id: ""
	I1213 10:41:19.315978  359214 logs.go:282] 0 containers: []
	W1213 10:41:19.315985  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:19.315990  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:19.316046  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:19.343352  359214 cri.go:89] found id: ""
	I1213 10:41:19.343366  359214 logs.go:282] 0 containers: []
	W1213 10:41:19.343373  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:19.343378  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:19.343434  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:19.367745  359214 cri.go:89] found id: ""
	I1213 10:41:19.367760  359214 logs.go:282] 0 containers: []
	W1213 10:41:19.367767  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:19.367773  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:19.367830  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:19.391416  359214 cri.go:89] found id: ""
	I1213 10:41:19.391429  359214 logs.go:282] 0 containers: []
	W1213 10:41:19.391437  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:19.391442  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:19.391503  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:19.420969  359214 cri.go:89] found id: ""
	I1213 10:41:19.420982  359214 logs.go:282] 0 containers: []
	W1213 10:41:19.420989  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:19.420995  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:19.421051  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:19.459512  359214 cri.go:89] found id: ""
	I1213 10:41:19.459528  359214 logs.go:282] 0 containers: []
	W1213 10:41:19.459536  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:19.459544  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:19.459555  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:41:19.490208  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:19.490224  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:19.546240  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:19.546261  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:19.562645  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:19.562664  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:19.625588  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:19.617541   16770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:19.617927   16770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:19.619446   16770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:19.619795   16770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:19.621463   16770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:41:19.617541   16770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:19.617927   16770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:19.619446   16770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:19.619795   16770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:19.621463   16770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:41:19.625599  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:19.625610  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:22.187966  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:22.198583  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:22.198650  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:22.223213  359214 cri.go:89] found id: ""
	I1213 10:41:22.223227  359214 logs.go:282] 0 containers: []
	W1213 10:41:22.223240  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:22.223246  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:22.223303  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:22.248552  359214 cri.go:89] found id: ""
	I1213 10:41:22.248567  359214 logs.go:282] 0 containers: []
	W1213 10:41:22.248574  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:22.248579  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:22.248641  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:22.273682  359214 cri.go:89] found id: ""
	I1213 10:41:22.273697  359214 logs.go:282] 0 containers: []
	W1213 10:41:22.273714  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:22.273720  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:22.273802  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:22.299868  359214 cri.go:89] found id: ""
	I1213 10:41:22.299883  359214 logs.go:282] 0 containers: []
	W1213 10:41:22.299891  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:22.299896  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:22.299962  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:22.325309  359214 cri.go:89] found id: ""
	I1213 10:41:22.325324  359214 logs.go:282] 0 containers: []
	W1213 10:41:22.325331  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:22.325337  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:22.325399  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:22.354179  359214 cri.go:89] found id: ""
	I1213 10:41:22.354193  359214 logs.go:282] 0 containers: []
	W1213 10:41:22.354200  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:22.354205  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:22.354261  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:22.378958  359214 cri.go:89] found id: ""
	I1213 10:41:22.378980  359214 logs.go:282] 0 containers: []
	W1213 10:41:22.378987  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:22.378997  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:22.379007  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:22.440927  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:22.440949  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:22.460102  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:22.460120  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:22.529575  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:22.521290   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:22.521799   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:22.523477   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:22.524007   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:22.525558   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:41:22.521290   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:22.521799   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:22.523477   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:22.524007   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:22.525558   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:41:22.529585  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:22.529595  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:22.592904  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:22.592925  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
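The failed-describe-nodes blocks report "stdout:" (empty) and "stderr:" separately, so the runner evidently keeps the two streams in distinct buffers before formatting the error. A sketch of that capture in Go, using the kubectl invocation taken verbatim from the log (it assumes that binary path exists on the machine running it):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("/bin/bash", "-c",
    		"sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
    	var stdout, stderr bytes.Buffer
    	cmd.Stdout, cmd.Stderr = &stdout, &stderr
    	err := cmd.Run()
    	// With no apiserver, Run returns exit status 1, stdout stays empty,
    	// and stderr carries the "connection refused" lines seen above.
    	fmt.Printf("err: %v\nstdout:\n%s\nstderr:\n%s\n", err, stdout.String(), stderr.String())
    }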
	I1213 10:41:25.122090  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:25.132657  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:25.132721  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:25.159021  359214 cri.go:89] found id: ""
	I1213 10:41:25.159036  359214 logs.go:282] 0 containers: []
	W1213 10:41:25.159044  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:25.159049  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:25.159111  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:25.185666  359214 cri.go:89] found id: ""
	I1213 10:41:25.185691  359214 logs.go:282] 0 containers: []
	W1213 10:41:25.185700  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:25.185706  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:25.185787  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:25.211201  359214 cri.go:89] found id: ""
	I1213 10:41:25.211216  359214 logs.go:282] 0 containers: []
	W1213 10:41:25.211223  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:25.211228  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:25.211288  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:25.241164  359214 cri.go:89] found id: ""
	I1213 10:41:25.241178  359214 logs.go:282] 0 containers: []
	W1213 10:41:25.241185  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:25.241191  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:25.241259  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:25.266721  359214 cri.go:89] found id: ""
	I1213 10:41:25.266737  359214 logs.go:282] 0 containers: []
	W1213 10:41:25.266745  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:25.266751  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:25.266815  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:25.292241  359214 cri.go:89] found id: ""
	I1213 10:41:25.292255  359214 logs.go:282] 0 containers: []
	W1213 10:41:25.292263  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:25.292272  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:25.292332  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:25.317411  359214 cri.go:89] found id: ""
	I1213 10:41:25.317441  359214 logs.go:282] 0 containers: []
	W1213 10:41:25.317450  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:25.317458  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:25.317469  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:25.373328  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:25.373348  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:25.390032  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:25.390057  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:25.483290  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:25.471186   16962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:25.471638   16962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:25.473963   16962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:25.474270   16962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:25.475696   16962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:41:25.471186   16962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:25.471638   16962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:25.473963   16962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:25.474270   16962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:25.475696   16962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:41:25.483300  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:25.483311  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:25.544908  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:25.544930  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:41:28.078163  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:28.091034  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:28.091099  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:28.115911  359214 cri.go:89] found id: ""
	I1213 10:41:28.115925  359214 logs.go:282] 0 containers: []
	W1213 10:41:28.115934  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:28.115940  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:28.116004  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:28.139316  359214 cri.go:89] found id: ""
	I1213 10:41:28.139330  359214 logs.go:282] 0 containers: []
	W1213 10:41:28.139338  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:28.139343  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:28.139399  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:28.164405  359214 cri.go:89] found id: ""
	I1213 10:41:28.164420  359214 logs.go:282] 0 containers: []
	W1213 10:41:28.164427  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:28.164434  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:28.164494  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:28.193103  359214 cri.go:89] found id: ""
	I1213 10:41:28.193117  359214 logs.go:282] 0 containers: []
	W1213 10:41:28.193130  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:28.193136  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:28.193191  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:28.218193  359214 cri.go:89] found id: ""
	I1213 10:41:28.218207  359214 logs.go:282] 0 containers: []
	W1213 10:41:28.218214  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:28.218219  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:28.218277  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:28.246727  359214 cri.go:89] found id: ""
	I1213 10:41:28.246741  359214 logs.go:282] 0 containers: []
	W1213 10:41:28.246748  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:28.246754  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:28.246828  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:28.272720  359214 cri.go:89] found id: ""
	I1213 10:41:28.272735  359214 logs.go:282] 0 containers: []
	W1213 10:41:28.272753  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:28.272761  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:28.272771  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:28.329731  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:28.329751  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:28.345935  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:28.345953  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:28.409004  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:28.400511   17066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:28.401329   17066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:28.403117   17066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:28.403657   17066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:28.404653   17066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:41:28.400511   17066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:28.401329   17066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:28.403117   17066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:28.403657   17066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:28.404653   17066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:41:28.409014  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:28.409024  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:28.475582  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:28.475603  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:41:31.008193  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:31.019100  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:31.019165  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:31.043886  359214 cri.go:89] found id: ""
	I1213 10:41:31.043907  359214 logs.go:282] 0 containers: []
	W1213 10:41:31.043915  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:31.043921  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:31.043987  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:31.069993  359214 cri.go:89] found id: ""
	I1213 10:41:31.070008  359214 logs.go:282] 0 containers: []
	W1213 10:41:31.070016  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:31.070022  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:31.070089  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:31.098048  359214 cri.go:89] found id: ""
	I1213 10:41:31.098075  359214 logs.go:282] 0 containers: []
	W1213 10:41:31.098083  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:31.098089  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:31.098161  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:31.123592  359214 cri.go:89] found id: ""
	I1213 10:41:31.123608  359214 logs.go:282] 0 containers: []
	W1213 10:41:31.123616  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:31.123621  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:31.123686  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:31.151147  359214 cri.go:89] found id: ""
	I1213 10:41:31.151163  359214 logs.go:282] 0 containers: []
	W1213 10:41:31.151171  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:31.151177  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:31.151244  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:31.181236  359214 cri.go:89] found id: ""
	I1213 10:41:31.181257  359214 logs.go:282] 0 containers: []
	W1213 10:41:31.181265  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:31.181270  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:31.181332  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:31.210269  359214 cri.go:89] found id: ""
	I1213 10:41:31.210283  359214 logs.go:282] 0 containers: []
	W1213 10:41:31.210303  359214 logs.go:284] No container was found matching "kindnet"
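The component probes above all follow the same shape: one `crictl ps` per control-plane component. A minimal shell sketch of the same sweep, assuming crictl is available on the node (the component list is taken verbatim from the log):

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  # empty output means no container, running or exited, matches the name
	  echo "== $c =="
	  sudo crictl ps -a --quiet --name="$c"
	done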
	I1213 10:41:31.210311  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:31.210325  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:31.227244  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:31.227261  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:31.293720  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:31.285094   17168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:31.285961   17168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:31.287612   17168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:31.287962   17168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:31.289354   17168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:41:31.285094   17168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:31.285961   17168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:31.287612   17168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:31.287962   17168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:31.289354   17168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:41:31.293731  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:31.293745  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:31.357626  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:31.357648  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:41:31.386271  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:31.386288  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:33.948226  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:33.958367  359214 kubeadm.go:602] duration metric: took 4m4.333187147s to restartPrimaryControlPlane
	W1213 10:41:33.958431  359214 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1213 10:41:33.958502  359214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 10:41:34.375262  359214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:41:34.388893  359214 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:41:34.396960  359214 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:41:34.397012  359214 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:41:34.404696  359214 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:41:34.404706  359214 kubeadm.go:158] found existing configuration files:
	
	I1213 10:41:34.404755  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:41:34.412350  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:41:34.412405  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:41:34.419971  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:41:34.427828  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:41:34.427887  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:41:34.435644  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:41:34.443354  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:41:34.443408  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:41:34.451024  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:41:34.458860  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:41:34.458918  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
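Each cleanup step above applies the same pattern: grep the kubeconfig file for the expected control-plane endpoint and remove the file when the check fails, so that kubeadm regenerates it. A compact equivalent, with the endpoint and paths taken from the log (a sketch, not minikube's actual code):

	endpoint="https://control-plane.minikube.internal:8441"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # a missing file also fails the grep, so it is removed either way
	  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done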
	I1213 10:41:34.466249  359214 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:41:34.504797  359214 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:41:34.504845  359214 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:41:34.587434  359214 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:41:34.587499  359214 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:41:34.587534  359214 kubeadm.go:319] OS: Linux
	I1213 10:41:34.587577  359214 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:41:34.587624  359214 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:41:34.587670  359214 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:41:34.587717  359214 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:41:34.587764  359214 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:41:34.587816  359214 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:41:34.587860  359214 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:41:34.587906  359214 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:41:34.587951  359214 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:41:34.656000  359214 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:41:34.656112  359214 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:41:34.656196  359214 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:41:34.661831  359214 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:41:34.665544  359214 out.go:252]   - Generating certificates and keys ...
	I1213 10:41:34.665620  359214 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:41:34.665681  359214 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:41:34.665752  359214 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:41:34.665808  359214 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:41:34.665873  359214 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:41:34.665922  359214 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:41:34.665981  359214 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:41:34.666037  359214 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:41:34.666107  359214 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:41:34.666174  359214 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:41:34.666208  359214 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:41:34.666259  359214 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:41:35.121283  359214 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:41:35.663053  359214 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:41:35.746928  359214 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:41:35.962879  359214 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:41:36.165716  359214 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:41:36.166361  359214 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:41:36.169355  359214 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:41:36.172503  359214 out.go:252]   - Booting up control plane ...
	I1213 10:41:36.172623  359214 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:41:36.172875  359214 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:41:36.174488  359214 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:41:36.195010  359214 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:41:36.195108  359214 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:41:36.203505  359214 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:41:36.203828  359214 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:41:36.204072  359214 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:41:36.339853  359214 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:41:36.339968  359214 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:45:36.340589  359214 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00099636s
	I1213 10:45:36.340614  359214 kubeadm.go:319] 
	I1213 10:45:36.340667  359214 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:45:36.340697  359214 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:45:36.340795  359214 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:45:36.340800  359214 kubeadm.go:319] 
	I1213 10:45:36.340897  359214 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:45:36.340926  359214 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:45:36.340953  359214 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:45:36.340956  359214 kubeadm.go:319] 
	I1213 10:45:36.344674  359214 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:45:36.345121  359214 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:45:36.345236  359214 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:45:36.345471  359214 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 10:45:36.345476  359214 kubeadm.go:319] 
	I1213 10:45:36.345548  359214 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
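The probe kubeadm gives up on can be rerun by hand on the node; the commands below simply restate the checks named in the output above:

	# the same healthz endpoint kubeadm polls for up to 4m0s
	curl -sSL http://127.0.0.1:10248/healthz
	# unit state and recent kubelet logs, as kubeadm suggests
	systemctl status kubelet
	journalctl -xeu kubelet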
	W1213 10:45:36.345669  359214 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00099636s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
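The cgroups v1 warning in the stderr above names its escape hatch: setting the kubelet configuration option FailCgroupV1 to false. As a sketch only, the stanza could be appended to the kubelet config path shown in the log; the exact YAML key is an assumption based on the option name in the warning, and kubeadm rewrites this file on every init, so a manual edit is not durable:

	# illustrative only: /var/lib/kubelet/config.yaml is regenerated by 'kubeadm init'
	grep -q '^failCgroupV1:' /var/lib/kubelet/config.yaml || \
	  echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml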
	
	I1213 10:45:36.345754  359214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 10:45:36.752142  359214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:45:36.765694  359214 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:45:36.765753  359214 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:45:36.773442  359214 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:45:36.773451  359214 kubeadm.go:158] found existing configuration files:
	
	I1213 10:45:36.773504  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:45:36.781648  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:45:36.781706  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:45:36.789406  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:45:36.797582  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:45:36.797641  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:45:36.805463  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:45:36.813325  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:45:36.813378  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:45:36.820926  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:45:36.828930  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:45:36.828988  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:45:36.836622  359214 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:45:36.877023  359214 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:45:36.877075  359214 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:45:36.946303  359214 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:45:36.946364  359214 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:45:36.946398  359214 kubeadm.go:319] OS: Linux
	I1213 10:45:36.946444  359214 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:45:36.946489  359214 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:45:36.946532  359214 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:45:36.946576  359214 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:45:36.946620  359214 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:45:36.946665  359214 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:45:36.946727  359214 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:45:36.946771  359214 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:45:36.946813  359214 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:45:37.023251  359214 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:45:37.023367  359214 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:45:37.023453  359214 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:45:37.035188  359214 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:45:37.040505  359214 out.go:252]   - Generating certificates and keys ...
	I1213 10:45:37.040588  359214 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:45:37.040657  359214 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:45:37.040732  359214 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:45:37.040792  359214 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:45:37.040860  359214 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:45:37.040912  359214 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:45:37.040974  359214 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:45:37.041034  359214 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:45:37.041112  359214 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:45:37.041183  359214 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:45:37.041219  359214 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:45:37.041274  359214 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:45:37.085508  359214 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:45:37.524146  359214 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:45:37.643175  359214 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:45:38.077377  359214 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:45:38.482147  359214 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:45:38.482682  359214 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:45:38.485202  359214 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:45:38.490562  359214 out.go:252]   - Booting up control plane ...
	I1213 10:45:38.490673  359214 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:45:38.490778  359214 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:45:38.490854  359214 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:45:38.510040  359214 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:45:38.510136  359214 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:45:38.518983  359214 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:45:38.519096  359214 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:45:38.519153  359214 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:45:38.652209  359214 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:45:38.652350  359214 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:49:38.651567  359214 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001187482s
	I1213 10:49:38.651592  359214 kubeadm.go:319] 
	I1213 10:49:38.651654  359214 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:49:38.651686  359214 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:49:38.651792  359214 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:49:38.651797  359214 kubeadm.go:319] 
	I1213 10:49:38.651939  359214 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:49:38.651995  359214 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:49:38.652034  359214 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:49:38.652037  359214 kubeadm.go:319] 
	I1213 10:49:38.656860  359214 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:49:38.657251  359214 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:49:38.657352  359214 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:49:38.657572  359214 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 10:49:38.657576  359214 kubeadm.go:319] 
	I1213 10:49:38.657639  359214 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 10:49:38.657718  359214 kubeadm.go:403] duration metric: took 12m9.068082439s to StartCluster
	I1213 10:49:38.657750  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:49:38.657821  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:49:38.689768  359214 cri.go:89] found id: ""
	I1213 10:49:38.689783  359214 logs.go:282] 0 containers: []
	W1213 10:49:38.689798  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:49:38.689803  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:49:38.689865  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:49:38.719427  359214 cri.go:89] found id: ""
	I1213 10:49:38.719441  359214 logs.go:282] 0 containers: []
	W1213 10:49:38.719449  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:49:38.719455  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:49:38.719513  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:49:38.747452  359214 cri.go:89] found id: ""
	I1213 10:49:38.747466  359214 logs.go:282] 0 containers: []
	W1213 10:49:38.747474  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:49:38.747480  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:49:38.747544  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:49:38.772270  359214 cri.go:89] found id: ""
	I1213 10:49:38.772286  359214 logs.go:282] 0 containers: []
	W1213 10:49:38.772293  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:49:38.772298  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:49:38.772358  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:49:38.796548  359214 cri.go:89] found id: ""
	I1213 10:49:38.796562  359214 logs.go:282] 0 containers: []
	W1213 10:49:38.796570  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:49:38.796575  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:49:38.796633  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:49:38.825383  359214 cri.go:89] found id: ""
	I1213 10:49:38.825397  359214 logs.go:282] 0 containers: []
	W1213 10:49:38.825404  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:49:38.825410  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:49:38.825467  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:49:38.854743  359214 cri.go:89] found id: ""
	I1213 10:49:38.854758  359214 logs.go:282] 0 containers: []
	W1213 10:49:38.854765  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:49:38.854775  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:49:38.854785  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:49:38.911438  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:49:38.911459  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:49:38.928194  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:49:38.928212  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:49:38.993056  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:49:38.985025   20997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:38.985836   20997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:38.987445   20997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:38.987763   20997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:38.989301   20997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:49:38.985025   20997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:38.985836   20997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:38.987445   20997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:38.987763   20997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:38.989301   20997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:49:38.993068  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:49:38.993079  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:49:39.059560  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:49:39.059584  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:49:39.090490  359214 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001187482s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 10:49:39.090521  359214 out.go:285] * 
	W1213 10:49:39.090586  359214 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001187482s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 10:49:39.090603  359214 out.go:285] * 
	W1213 10:49:39.092733  359214 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:49:39.097735  359214 out.go:203] 
	W1213 10:49:39.101721  359214 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001187482s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 10:49:39.101772  359214 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 10:49:39.101799  359214 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 10:49:39.104924  359214 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861227644Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861318114Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861438764Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861513571Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861578449Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861642483Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861707304Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861776350Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861845545Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861934818Z" level=info msg="Connect containerd service"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.862289545Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.862951451Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.874919104Z" level=info msg="Start subscribing containerd event"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.875103516Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.875569851Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.881349344Z" level=info msg="Start recovering state"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.920785039Z" level=info msg="Start event monitor"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.921012364Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.921112731Z" level=info msg="Start streaming server"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.921198171Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.921421730Z" level=info msg="runtime interface starting up..."
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.921496201Z" level=info msg="starting plugins..."
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.921561104Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 10:37:27 functional-652709 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.922785206Z" level=info msg="containerd successfully booted in 0.088911s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:05.053130   23324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:05.053682   23324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:05.055308   23324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:05.055759   23324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:05.057385   23324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +12.907693] overlayfs: idmapped layers are currently not supported
	[Dec13 09:25] overlayfs: idmapped layers are currently not supported
	[ +26.192425] overlayfs: idmapped layers are currently not supported
	[Dec13 09:26] overlayfs: idmapped layers are currently not supported
	[ +25.729788] overlayfs: idmapped layers are currently not supported
	[Dec13 09:27] overlayfs: idmapped layers are currently not supported
	[Dec13 09:28] overlayfs: idmapped layers are currently not supported
	[Dec13 09:31] overlayfs: idmapped layers are currently not supported
	[Dec13 09:32] overlayfs: idmapped layers are currently not supported
	[ +17.979093] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec13 09:43] overlayfs: idmapped layers are currently not supported
	[Dec13 09:45] overlayfs: idmapped layers are currently not supported
	[ +25.885102] overlayfs: idmapped layers are currently not supported
	[Dec13 09:46] overlayfs: idmapped layers are currently not supported
	[ +22.078149] overlayfs: idmapped layers are currently not supported
	[Dec13 09:47] overlayfs: idmapped layers are currently not supported
	[Dec13 09:48] overlayfs: idmapped layers are currently not supported
	[Dec13 09:49] overlayfs: idmapped layers are currently not supported
	[Dec13 09:51] overlayfs: idmapped layers are currently not supported
	[ +17.043564] overlayfs: idmapped layers are currently not supported
	[Dec13 09:52] overlayfs: idmapped layers are currently not supported
	[Dec13 09:53] overlayfs: idmapped layers are currently not supported
	[Dec13 10:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:19] hrtimer: interrupt took 21247146 ns
	
	
	==> kernel <==
	 10:52:05 up  3:34,  0 user,  load average: 1.17, 0.42, 0.51
	Linux functional-652709 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:52:02 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:52:02 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 512.
	Dec 13 10:52:02 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:52:02 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:52:02 functional-652709 kubelet[23156]: E1213 10:52:02.714733   23156 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:52:02 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:52:02 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:52:03 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 513.
	Dec 13 10:52:03 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:52:03 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:52:03 functional-652709 kubelet[23195]: E1213 10:52:03.472169   23195 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:52:03 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:52:03 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:52:04 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 514.
	Dec 13 10:52:04 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:52:04 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:52:04 functional-652709 kubelet[23231]: E1213 10:52:04.157271   23231 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:52:04 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:52:04 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:52:04 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 515.
	Dec 13 10:52:04 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:52:04 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:52:04 functional-652709 kubelet[23304]: E1213 10:52:04.968296   23304 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:52:04 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:52:04 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
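The kubelet crash loop above (restart counters 512-515) is the same cgroup v1 validation failure that kubeadm warned about, which is why the apiserver on port 8441 never comes up. A minimal diagnostic sketch, assuming shell access to the same node; the first three commands are the ones quoted in the log output itself, and the final flag is minikube's own suggestion, not a verified fix:

    # Inspect the failing unit and its journal (suggested in the kubeadm output)
    systemctl status kubelet
    journalctl -xeu kubelet

    # Probe the health endpoint that kubeadm's wait-control-plane phase polls
    curl -sSL http://127.0.0.1:10248/healthz

    # Workaround suggested in the minikube warning above
    out/minikube-linux-arm64 start -p functional-652709 --extra-config=kubelet.cgroup-driver=systemd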
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-652709 -n functional-652709
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-652709 -n functional-652709: exit status 2 (375.573011ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-652709" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.19s)
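The status probes used by the test helpers select single fields from minikube's status struct with Go templates; a sketch of the two checks used in this post-mortem, assuming the same profile name as above:

    # Apiserver state for the profile; printed "Stopped" in the output above
    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p functional-652709 -n functional-652709

    # Host (container) state, queried the same way elsewhere in this report
    out/minikube-linux-arm64 status --format='{{.Host}}' -p functional-652709 -n functional-652709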

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.51s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-652709 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-652709 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (55.785379ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-652709 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-652709 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-652709 describe po hello-node-connect: exit status 1 (61.67096ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1614: "kubectl --context functional-652709 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-652709 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-652709 logs -l app=hello-node-connect: exit status 1 (54.343395ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1620: "kubectl --context functional-652709 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-652709 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-652709 describe svc hello-node-connect: exit status 1 (67.044758ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1626: "kubectl --context functional-652709 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
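All three post-mortem kubectl calls fail the same way: nothing is listening on the apiserver endpoint. A quick reachability sketch against the endpoint taken from the errors above (curl -k only exercises the TCP/TLS path, not authentication); while the control plane is down it fails exactly as kubectl does, with the illustrative output shown in the trailing comment:

    curl -k https://192.168.49.2:8441/version
    # -> curl: (7) Failed to connect to 192.168.49.2 port 8441: Connection refused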
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-652709
helpers_test.go:244: (dbg) docker inspect functional-652709:

-- stdout --
	[
	    {
	        "Id": "0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f",
	        "Created": "2025-12-13T10:22:44.366993781Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 347931,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:22:44.437030763Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/hostname",
	        "HostsPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/hosts",
	        "LogPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f-json.log",
	        "Name": "/functional-652709",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-652709:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-652709",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f",
	                "LowerDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095-init/diff:/var/lib/docker/overlay2/5d0ca86320f2c06765995d644a73af30cd6ddb36f5c1a2a6b1ebb24af53cdabd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/merged",
	                "UpperDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/diff",
	                "WorkDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-652709",
	                "Source": "/var/lib/docker/volumes/functional-652709/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-652709",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-652709",
	                "name.minikube.sigs.k8s.io": "functional-652709",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "52e527b5bd789a02eb7efb651200033ed4929e5fc7545e9df042d3f777cc9782",
	            "SandboxKey": "/var/run/docker/netns/52e527b5bd78",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-652709": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:23:08:9e:cb:13",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "344f2b940117dadb28d1ef1328f911c0446307288fdfafebfe59f38e473f79cb",
	                    "EndpointID": "8954f96e5987202be5715e7023384fe862744778b2520bccba28c57814f0980f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-652709",
	                        "0f6101071ca2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
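The inspect output shows how the node's apiserver port is published: container port 8441/tcp is bound to 127.0.0.1:33128 on the host. A sketch of pulling a single mapping out with a Go template, mirroring the command minikube itself runs later in this log (shown there for 22/tcp); the port choice here is illustrative:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-652709
    # -> 33128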
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-652709 -n functional-652709
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-652709 -n functional-652709: exit status 2 (347.292952ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache   │ functional-652709 cache reload                                                                                                                              │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ ssh     │ functional-652709 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                     │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                            │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                         │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │ 13 Dec 25 10:37 UTC │
	│ kubectl │ functional-652709 kubectl -- --context functional-652709 get pods                                                                                           │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │                     │
	│ start   │ -p functional-652709 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                    │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:37 UTC │                     │
	│ config  │ functional-652709 config unset cpus                                                                                                                         │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ cp      │ functional-652709 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                          │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ config  │ functional-652709 config get cpus                                                                                                                           │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │                     │
	│ config  │ functional-652709 config set cpus 2                                                                                                                         │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ config  │ functional-652709 config get cpus                                                                                                                           │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ config  │ functional-652709 config unset cpus                                                                                                                         │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ ssh     │ functional-652709 ssh -n functional-652709 sudo cat /home/docker/cp-test.txt                                                                                │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ config  │ functional-652709 config get cpus                                                                                                                           │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │                     │
	│ ssh     │ functional-652709 ssh echo hello                                                                                                                            │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ cp      │ functional-652709 cp functional-652709:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp425140614/001/cp-test.txt │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ ssh     │ functional-652709 ssh cat /etc/hostname                                                                                                                     │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ ssh     │ functional-652709 ssh -n functional-652709 sudo cat /home/docker/cp-test.txt                                                                                │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ tunnel  │ functional-652709 tunnel --alsologtostderr                                                                                                                  │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │                     │
	│ tunnel  │ functional-652709 tunnel --alsologtostderr                                                                                                                  │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │                     │
	│ cp      │ functional-652709 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                   │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ tunnel  │ functional-652709 tunnel --alsologtostderr                                                                                                                  │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │                     │
	│ ssh     │ functional-652709 ssh -n functional-652709 sudo cat /tmp/does/not/exist/cp-test.txt                                                                         │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ addons  │ functional-652709 addons list                                                                                                                               │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:51 UTC │ 13 Dec 25 10:51 UTC │
	│ addons  │ functional-652709 addons list -o json                                                                                                                       │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:51 UTC │ 13 Dec 25 10:51 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:37:25
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:37:25.138350  359214 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:37:25.138465  359214 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:37:25.138469  359214 out.go:374] Setting ErrFile to fd 2...
	I1213 10:37:25.138473  359214 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:37:25.138742  359214 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 10:37:25.139091  359214 out.go:368] Setting JSON to false
	I1213 10:37:25.139911  359214 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":11998,"bootTime":1765610247,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 10:37:25.139964  359214 start.go:143] virtualization:  
	I1213 10:37:25.143535  359214 out.go:179] * [functional-652709] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:37:25.146407  359214 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 10:37:25.146500  359214 notify.go:221] Checking for updates...
	I1213 10:37:25.152371  359214 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:37:25.155287  359214 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:37:25.158064  359214 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 10:37:25.162885  359214 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:37:25.165865  359214 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:37:25.169282  359214 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:37:25.169378  359214 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:37:25.203946  359214 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:37:25.204073  359214 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:37:25.282140  359214 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 10:37:25.272517516 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:37:25.282233  359214 docker.go:319] overlay module found
	I1213 10:37:25.285314  359214 out.go:179] * Using the docker driver based on existing profile
	I1213 10:37:25.288091  359214 start.go:309] selected driver: docker
	I1213 10:37:25.288098  359214 start.go:927] validating driver "docker" against &{Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:37:25.288215  359214 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:37:25.288310  359214 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:37:25.346233  359214 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 10:37:25.336833323 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:37:25.346649  359214 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:37:25.346672  359214 cni.go:84] Creating CNI manager for ""
	I1213 10:37:25.346746  359214 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:37:25.346788  359214 start.go:353] cluster config:
	{Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:37:25.351648  359214 out.go:179] * Starting "functional-652709" primary control-plane node in "functional-652709" cluster
	I1213 10:37:25.354472  359214 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 10:37:25.357365  359214 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:37:25.360240  359214 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 10:37:25.360279  359214 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 10:37:25.360290  359214 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:37:25.360305  359214 cache.go:65] Caching tarball of preloaded images
	I1213 10:37:25.360390  359214 preload.go:238] Found /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 10:37:25.360398  359214 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 10:37:25.360508  359214 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/config.json ...
	I1213 10:37:25.379669  359214 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:37:25.379680  359214 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:37:25.379701  359214 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:37:25.379731  359214 start.go:360] acquireMachinesLock for functional-652709: {Name:mk6e8c40fbbb5af0bb2468340fd710875030300d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:37:25.379795  359214 start.go:364] duration metric: took 46.958µs to acquireMachinesLock for "functional-652709"
	I1213 10:37:25.379812  359214 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:37:25.379817  359214 fix.go:54] fixHost starting: 
	I1213 10:37:25.380078  359214 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
	I1213 10:37:25.396614  359214 fix.go:112] recreateIfNeeded on functional-652709: state=Running err=<nil>
	W1213 10:37:25.396632  359214 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 10:37:25.399750  359214 out.go:252] * Updating the running docker "functional-652709" container ...
	I1213 10:37:25.399771  359214 machine.go:94] provisionDockerMachine start ...
	I1213 10:37:25.399844  359214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:37:25.416990  359214 main.go:143] libmachine: Using SSH client type: native
	I1213 10:37:25.417324  359214 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1213 10:37:25.417330  359214 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:37:25.566232  359214 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-652709
	
	I1213 10:37:25.566247  359214 ubuntu.go:182] provisioning hostname "functional-652709"
	I1213 10:37:25.566312  359214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:37:25.583930  359214 main.go:143] libmachine: Using SSH client type: native
	I1213 10:37:25.584239  359214 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1213 10:37:25.584247  359214 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-652709 && echo "functional-652709" | sudo tee /etc/hostname
	I1213 10:37:25.743712  359214 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-652709
	
	I1213 10:37:25.743781  359214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:37:25.761387  359214 main.go:143] libmachine: Using SSH client type: native
	I1213 10:37:25.761683  359214 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33125 <nil> <nil>}
	I1213 10:37:25.761697  359214 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-652709' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-652709/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-652709' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:37:25.915528  359214 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:37:25.915543  359214 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-307042/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-307042/.minikube}
	I1213 10:37:25.915567  359214 ubuntu.go:190] setting up certificates
	I1213 10:37:25.915589  359214 provision.go:84] configureAuth start
	I1213 10:37:25.915650  359214 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-652709
	I1213 10:37:25.937241  359214 provision.go:143] copyHostCerts
	I1213 10:37:25.937315  359214 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem, removing ...
	I1213 10:37:25.937323  359214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem
	I1213 10:37:25.937397  359214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem (1082 bytes)
	I1213 10:37:25.937493  359214 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem, removing ...
	I1213 10:37:25.937497  359214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem
	I1213 10:37:25.937521  359214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem (1123 bytes)
	I1213 10:37:25.937570  359214 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem, removing ...
	I1213 10:37:25.937573  359214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem
	I1213 10:37:25.937593  359214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem (1675 bytes)
	I1213 10:37:25.937635  359214 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem org=jenkins.functional-652709 san=[127.0.0.1 192.168.49.2 functional-652709 localhost minikube]
	I1213 10:37:26.244127  359214 provision.go:177] copyRemoteCerts
	I1213 10:37:26.244186  359214 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:37:26.244225  359214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:37:26.264658  359214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:37:26.370401  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:37:26.387044  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:37:26.404259  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:37:26.421389  359214 provision.go:87] duration metric: took 505.777833ms to configureAuth
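
	Note: configureAuth above generated a server certificate with san=[127.0.0.1 192.168.49.2 functional-652709 localhost minikube]. The SANs can be verified by hand with openssl (a sketch using the path logged above; not part of the test run):

	    # print the Subject Alternative Name extension of the freshly generated server cert
	    openssl x509 -noout -text -in /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem \
	      | grep -A1 'Subject Alternative Name'
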
	I1213 10:37:26.421407  359214 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:37:26.421614  359214 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:37:26.421620  359214 machine.go:97] duration metric: took 1.021844371s to provisionDockerMachine
	I1213 10:37:26.421627  359214 start.go:293] postStartSetup for "functional-652709" (driver="docker")
	I1213 10:37:26.421636  359214 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:37:26.421692  359214 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:37:26.421728  359214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:37:26.439115  359214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:37:26.542461  359214 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:37:26.545680  359214 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:37:26.545698  359214 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:37:26.545710  359214 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/addons for local assets ...
	I1213 10:37:26.545763  359214 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/files for local assets ...
	I1213 10:37:26.545836  359214 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> 3089152.pem in /etc/ssl/certs
	I1213 10:37:26.545911  359214 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/test/nested/copy/308915/hosts -> hosts in /etc/test/nested/copy/308915
	I1213 10:37:26.545959  359214 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/308915
	I1213 10:37:26.553760  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 10:37:26.571190  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/test/nested/copy/308915/hosts --> /etc/test/nested/copy/308915/hosts (40 bytes)
	I1213 10:37:26.588882  359214 start.go:296] duration metric: took 167.239997ms for postStartSetup
	I1213 10:37:26.588951  359214 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:37:26.588988  359214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:37:26.606145  359214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:37:26.708907  359214 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:37:26.713681  359214 fix.go:56] duration metric: took 1.333856829s for fixHost
	I1213 10:37:26.713698  359214 start.go:83] releasing machines lock for "functional-652709", held for 1.333895015s
	I1213 10:37:26.713781  359214 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-652709
	I1213 10:37:26.733362  359214 ssh_runner.go:195] Run: cat /version.json
	I1213 10:37:26.733405  359214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:37:26.733670  359214 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:37:26.733727  359214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
	I1213 10:37:26.755898  359214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:37:26.764378  359214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
	I1213 10:37:26.858420  359214 ssh_runner.go:195] Run: systemctl --version
	I1213 10:37:26.952524  359214 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:37:26.956969  359214 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:37:26.957030  359214 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:37:26.964724  359214 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 10:37:26.964738  359214 start.go:496] detecting cgroup driver to use...
	I1213 10:37:26.964768  359214 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:37:26.964823  359214 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 10:37:26.980031  359214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 10:37:26.993058  359214 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:37:26.993140  359214 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:37:27.016019  359214 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:37:27.029352  359214 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:37:27.143876  359214 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:37:27.259911  359214 docker.go:234] disabling docker service ...
	I1213 10:37:27.259973  359214 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:37:27.275304  359214 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:37:27.288715  359214 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:37:27.403391  359214 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:37:27.538286  359214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:37:27.551384  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:37:27.565344  359214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 10:37:27.574020  359214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 10:37:27.583189  359214 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 10:37:27.583255  359214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 10:37:27.591895  359214 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:37:27.600966  359214 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 10:37:27.609996  359214 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:37:27.618821  359214 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:37:27.626864  359214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 10:37:27.635612  359214 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 10:37:27.644477  359214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 10:37:27.653477  359214 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:37:27.661005  359214 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:37:27.668365  359214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:37:27.776281  359214 ssh_runner.go:195] Run: sudo systemctl restart containerd
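
	Note: the sed edits above configure containerd for the cgroupfs driver (SystemdCgroup = false) and the registry.k8s.io/pause:3.10.1 sandbox image before the restart. A spot-check inside the node (a sketch, e.g. via minikube ssh; not part of the test run):

	    grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml
	    # expected: SystemdCgroup = false
	    #           sandbox_image = "registry.k8s.io/pause:3.10.1"
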
	I1213 10:37:27.924718  359214 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 10:37:27.924777  359214 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 10:37:27.928729  359214 start.go:564] Will wait 60s for crictl version
	I1213 10:37:27.928789  359214 ssh_runner.go:195] Run: which crictl
	I1213 10:37:27.932637  359214 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:37:27.956729  359214 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 10:37:27.956786  359214 ssh_runner.go:195] Run: containerd --version
	I1213 10:37:27.979747  359214 ssh_runner.go:195] Run: containerd --version
	I1213 10:37:28.007018  359214 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 10:37:28.009973  359214 cli_runner.go:164] Run: docker network inspect functional-652709 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:37:28.026979  359214 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 10:37:28.034215  359214 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1213 10:37:28.037114  359214 kubeadm.go:884] updating cluster {Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:37:28.037277  359214 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 10:37:28.037366  359214 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:37:28.069735  359214 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 10:37:28.069748  359214 containerd.go:534] Images already preloaded, skipping extraction
	I1213 10:37:28.069804  359214 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:37:28.094782  359214 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 10:37:28.094795  359214 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:37:28.094801  359214 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1213 10:37:28.094901  359214 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-652709 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 10:37:28.094963  359214 ssh_runner.go:195] Run: sudo crictl info
	I1213 10:37:28.123071  359214 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1213 10:37:28.123096  359214 cni.go:84] Creating CNI manager for ""
	I1213 10:37:28.123104  359214 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:37:28.123112  359214 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:37:28.123134  359214 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-652709 NodeName:functional-652709 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:37:28.123244  359214 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-652709"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 10:37:28.123313  359214 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:37:28.131175  359214 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:37:28.131238  359214 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:37:28.138792  359214 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 10:37:28.151537  359214 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:37:28.169495  359214 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
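
	Note: the kubeadm config rendered above now sits at /var/tmp/minikube/kubeadm.yaml.new. On recent kubeadm releases it could be sanity-checked offline with the validate subcommand (a sketch; minikube does not run this step itself):

	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
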
	I1213 10:37:28.184364  359214 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:37:28.188525  359214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:37:28.305096  359214 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:37:28.912534  359214 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709 for IP: 192.168.49.2
	I1213 10:37:28.912575  359214 certs.go:195] generating shared ca certs ...
	I1213 10:37:28.912591  359214 certs.go:227] acquiring lock for ca certs: {Name:mkca84a85b1b664dba08ef567e84d493256f825e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:37:28.912719  359214 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key
	I1213 10:37:28.912771  359214 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key
	I1213 10:37:28.912778  359214 certs.go:257] generating profile certs ...
	I1213 10:37:28.912857  359214 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.key
	I1213 10:37:28.912917  359214 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.key.86e7afd1
	I1213 10:37:28.912954  359214 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.key
	I1213 10:37:28.913063  359214 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem (1338 bytes)
	W1213 10:37:28.913092  359214 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915_empty.pem, impossibly tiny 0 bytes
	I1213 10:37:28.913099  359214 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:37:28.913124  359214 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:37:28.913151  359214 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:37:28.913174  359214 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem (1675 bytes)
	I1213 10:37:28.913221  359214 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 10:37:28.913808  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:37:28.931820  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:37:28.949028  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:37:28.966476  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:37:28.984047  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:37:29.002075  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 10:37:29.020305  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:37:29.037811  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:37:29.054630  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:37:29.071547  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem --> /usr/share/ca-certificates/308915.pem (1338 bytes)
	I1213 10:37:29.088633  359214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /usr/share/ca-certificates/3089152.pem (1708 bytes)
	I1213 10:37:29.105638  359214 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:37:29.118149  359214 ssh_runner.go:195] Run: openssl version
	I1213 10:37:29.124118  359214 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:37:29.131416  359214 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:37:29.138705  359214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:37:29.142329  359214 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:37:29.142388  359214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:37:29.183023  359214 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:37:29.190485  359214 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/308915.pem
	I1213 10:37:29.197738  359214 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/308915.pem /etc/ssl/certs/308915.pem
	I1213 10:37:29.205192  359214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/308915.pem
	I1213 10:37:29.209070  359214 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:22 /usr/share/ca-certificates/308915.pem
	I1213 10:37:29.209124  359214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308915.pem
	I1213 10:37:29.250234  359214 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:37:29.257744  359214 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3089152.pem
	I1213 10:37:29.265022  359214 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3089152.pem /etc/ssl/certs/3089152.pem
	I1213 10:37:29.272593  359214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3089152.pem
	I1213 10:37:29.276820  359214 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:22 /usr/share/ca-certificates/3089152.pem
	I1213 10:37:29.276874  359214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3089152.pem
	I1213 10:37:29.317834  359214 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
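
	Note: each CA in this block is copied to /usr/share/ca-certificates, symlinked into /etc/ssl/certs, and then trusted through a <subject-hash>.0 symlink whose name comes from openssl x509 -hash. Rebuilding one of those links by hand (a sketch for the 3089152.pem cert above, whose hash this run shows as 3ec20f2e):

	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/3089152.pem)
	    sudo ln -fs /etc/ssl/certs/3089152.pem "/etc/ssl/certs/${h}.0"   # h = 3ec20f2e here
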
	I1213 10:37:29.325126  359214 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:37:29.328844  359214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:37:29.369639  359214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:37:29.410192  359214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:37:29.467336  359214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:37:29.508158  359214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:37:29.549013  359214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
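
	Note: openssl x509 -checkend 86400 exits 0 only if the certificate is still valid 86400 seconds (24 h) from now; all six control-plane certs pass here, so none are regenerated. As a standalone check (a sketch):

	    sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	      && echo 'valid for at least 24h' || echo 'expires within 24h'
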
	I1213 10:37:29.589618  359214 kubeadm.go:401] StartCluster: {Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:37:29.589715  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 10:37:29.589775  359214 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:37:29.617382  359214 cri.go:89] found id: ""
	I1213 10:37:29.617441  359214 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:37:29.625150  359214 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:37:29.625165  359214 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:37:29.625217  359214 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:37:29.632536  359214 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:37:29.633037  359214 kubeconfig.go:125] found "functional-652709" server: "https://192.168.49.2:8441"
	I1213 10:37:29.635539  359214 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:37:29.643331  359214 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-13 10:22:52.033435592 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-13 10:37:28.181843120 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1213 10:37:29.643344  359214 kubeadm.go:1161] stopping kube-system containers ...
	I1213 10:37:29.643355  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1213 10:37:29.643418  359214 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:37:29.681117  359214 cri.go:89] found id: ""
	I1213 10:37:29.681185  359214 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 10:37:29.700348  359214 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:37:29.708464  359214 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 13 10:26 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 13 10:27 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 13 10:26 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 13 10:27 /etc/kubernetes/scheduler.conf
	
	I1213 10:37:29.708519  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:37:29.716973  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:37:29.724972  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:37:29.725027  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:37:29.732670  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:37:29.740374  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:37:29.740426  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:37:29.747796  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:37:29.755836  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:37:29.755895  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:37:29.763121  359214 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:37:29.770676  359214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:37:29.815944  359214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:37:31.022963  359214 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.206994632s)
	I1213 10:37:31.023029  359214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:37:31.239388  359214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:37:31.313712  359214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:37:31.358670  359214 api_server.go:52] waiting for apiserver process to appear ...
	I1213 10:37:31.358755  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:31.859658  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:32.358989  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:32.859540  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:33.359279  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:33.859755  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:34.358874  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:34.859660  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:35.358974  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:35.859781  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:36.359545  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:36.858931  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:37.359594  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:37.858997  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:38.359204  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:38.858979  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:39.358917  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:39.859473  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:40.359538  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:40.859107  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:41.358909  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:41.859704  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:42.359845  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:42.858940  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:43.359903  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:43.859817  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:44.359835  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:44.859527  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:45.359678  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:45.859496  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:46.359291  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:46.858996  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:47.358908  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:47.859899  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:48.358923  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:48.859520  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:49.358971  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:49.859614  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:50.359594  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:50.859684  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:51.359555  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:51.859532  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:52.359643  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:52.858959  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:53.359880  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:53.859709  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:54.359771  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:54.859730  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:55.359785  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:55.858870  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:56.359649  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:56.858975  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:57.358923  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:57.858974  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:58.359777  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:58.859581  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:59.359156  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:59.858896  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:00.358974  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:00.859820  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:01.359786  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:01.858901  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:02.359740  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:02.858926  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:03.359018  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:03.859003  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:04.358882  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:04.859861  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:05.358860  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:05.859819  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:06.358836  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:06.859844  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:07.359700  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:07.859637  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:08.358985  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:08.859911  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:09.358995  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:09.859620  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:10.359502  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:10.859134  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:11.358958  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:11.859244  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:12.359094  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:12.858981  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:13.359211  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:13.859751  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:14.358846  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:14.859594  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:15.358998  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:15.859726  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:16.358944  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:16.859375  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:17.358986  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:17.859765  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:18.358918  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:18.859799  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:19.359117  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:19.859388  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:20.359631  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:20.858965  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:21.358912  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:21.858871  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:22.359799  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:22.859665  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:23.359516  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:23.859788  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:24.359018  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:24.858866  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:25.359003  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:25.859726  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:26.358952  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:26.859653  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:27.359769  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:27.859360  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:28.358958  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:28.859685  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:29.359809  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:29.859773  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:30.359871  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:30.859558  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
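
	Note: the block above is api_server.go polling for a kube-apiserver process at roughly 500 ms intervals; after about a minute with no match it falls back to gathering diagnostics below, then resumes polling. A shell equivalent of that wait loop (a sketch; the 60 s budget mirrors the "waiting for apiserver process to appear" log line above):

	    # poll for the apiserver process every 0.5 s, giving up after 60 s
	    timeout 60 bash -c "until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 0.5; done"
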
	I1213 10:38:31.359176  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:31.359252  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:31.383827  359214 cri.go:89] found id: ""
	I1213 10:38:31.383841  359214 logs.go:282] 0 containers: []
	W1213 10:38:31.383849  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:31.383855  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:31.383917  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:31.412267  359214 cri.go:89] found id: ""
	I1213 10:38:31.412291  359214 logs.go:282] 0 containers: []
	W1213 10:38:31.412300  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:31.412305  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:31.412364  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:31.437736  359214 cri.go:89] found id: ""
	I1213 10:38:31.437751  359214 logs.go:282] 0 containers: []
	W1213 10:38:31.437758  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:31.437763  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:31.437824  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:31.461791  359214 cri.go:89] found id: ""
	I1213 10:38:31.461806  359214 logs.go:282] 0 containers: []
	W1213 10:38:31.461813  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:31.461818  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:31.461880  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:31.488695  359214 cri.go:89] found id: ""
	I1213 10:38:31.488709  359214 logs.go:282] 0 containers: []
	W1213 10:38:31.488717  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:31.488722  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:31.488789  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:31.517230  359214 cri.go:89] found id: ""
	I1213 10:38:31.517245  359214 logs.go:282] 0 containers: []
	W1213 10:38:31.517274  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:31.517281  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:31.517340  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:31.541920  359214 cri.go:89] found id: ""
	I1213 10:38:31.541934  359214 logs.go:282] 0 containers: []
	W1213 10:38:31.541942  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:31.541951  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:31.541962  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:31.558143  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:31.558161  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:31.623427  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:31.614536   10797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.615101   10797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.616803   10797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.617190   10797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.619517   10797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:31.614536   10797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.615101   10797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.616803   10797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.617190   10797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.619517   10797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:31.623438  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:31.623449  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:31.686774  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:31.686794  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:31.719218  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:31.719234  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:34.280556  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:34.293171  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:34.293241  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:34.319161  359214 cri.go:89] found id: ""
	I1213 10:38:34.319176  359214 logs.go:282] 0 containers: []
	W1213 10:38:34.319183  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:34.319189  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:34.319245  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:34.348792  359214 cri.go:89] found id: ""
	I1213 10:38:34.348806  359214 logs.go:282] 0 containers: []
	W1213 10:38:34.348814  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:34.348819  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:34.348879  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:34.374794  359214 cri.go:89] found id: ""
	I1213 10:38:34.374809  359214 logs.go:282] 0 containers: []
	W1213 10:38:34.374816  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:34.374822  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:34.374883  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:34.399481  359214 cri.go:89] found id: ""
	I1213 10:38:34.399496  359214 logs.go:282] 0 containers: []
	W1213 10:38:34.399503  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:34.399509  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:34.399567  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:34.424169  359214 cri.go:89] found id: ""
	I1213 10:38:34.424184  359214 logs.go:282] 0 containers: []
	W1213 10:38:34.424191  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:34.424196  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:34.424300  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:34.449747  359214 cri.go:89] found id: ""
	I1213 10:38:34.449762  359214 logs.go:282] 0 containers: []
	W1213 10:38:34.449769  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:34.449775  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:34.449839  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:34.475244  359214 cri.go:89] found id: ""
	I1213 10:38:34.475259  359214 logs.go:282] 0 containers: []
	W1213 10:38:34.475266  359214 logs.go:284] No container was found matching "kindnet"
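The block above is minikube probing each expected control-plane component by container name; every probe returns an empty ID list, so the pods were never created. A compact, hypothetical one-shot version of the same per-component probes:

    # same checks as above, collapsed into one loop
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      [ -z "$ids" ] && echo "no containers matching $c"
    done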
	I1213 10:38:34.475274  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:34.475284  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:34.531644  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:34.531665  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
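For reference, the dmesg invocation used for the kernel-log gather prints human-readable timestamps without a pager or color and keeps only warning-level messages and above:

    # -P no pager, -H human-readable timestamps, -L=never no color, --level filters severities
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400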
	I1213 10:38:34.548876  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:34.548895  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:34.612831  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:34.605081   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.605477   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.607131   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.607458   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.609038   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:38:34.612842  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:34.612853  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:34.677588  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:34.677607  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:37.204561  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:37.215900  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:37.215960  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:37.240644  359214 cri.go:89] found id: ""
	I1213 10:38:37.240679  359214 logs.go:282] 0 containers: []
	W1213 10:38:37.240697  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:37.240710  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:37.240796  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:37.265154  359214 cri.go:89] found id: ""
	I1213 10:38:37.265168  359214 logs.go:282] 0 containers: []
	W1213 10:38:37.265176  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:37.265181  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:37.265240  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:37.290309  359214 cri.go:89] found id: ""
	I1213 10:38:37.290323  359214 logs.go:282] 0 containers: []
	W1213 10:38:37.290331  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:37.290336  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:37.290402  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:37.314207  359214 cri.go:89] found id: ""
	I1213 10:38:37.314222  359214 logs.go:282] 0 containers: []
	W1213 10:38:37.314229  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:37.314235  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:37.314294  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:37.338622  359214 cri.go:89] found id: ""
	I1213 10:38:37.338637  359214 logs.go:282] 0 containers: []
	W1213 10:38:37.338645  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:37.338651  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:37.338731  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:37.362866  359214 cri.go:89] found id: ""
	I1213 10:38:37.362881  359214 logs.go:282] 0 containers: []
	W1213 10:38:37.362888  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:37.362894  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:37.362954  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:37.388313  359214 cri.go:89] found id: ""
	I1213 10:38:37.388327  359214 logs.go:282] 0 containers: []
	W1213 10:38:37.388335  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:37.388343  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:37.388355  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:37.405018  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:37.405035  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:37.467928  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:37.459672   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.460192   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.461721   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.462120   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.463584   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:38:37.467941  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:37.467952  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
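The containerd unit log is gathered the same way. Note that the crictl probes return cleanly with empty lists rather than erroring, which suggests the CRI runtime itself is up; a quick hedged check from inside the node:

    # verify the runtime and its CRI endpoint are healthy
    sudo systemctl is-active containerd
    sudo crictl info >/dev/null && echo "CRI endpoint reachable"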
	I1213 10:38:37.536764  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:37.536793  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:37.565751  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:37.565767  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:40.124516  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:40.136075  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:40.136155  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:40.180740  359214 cri.go:89] found id: ""
	I1213 10:38:40.180755  359214 logs.go:282] 0 containers: []
	W1213 10:38:40.180763  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:40.180771  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:40.180844  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:40.214880  359214 cri.go:89] found id: ""
	I1213 10:38:40.214894  359214 logs.go:282] 0 containers: []
	W1213 10:38:40.214912  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:40.214918  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:40.214986  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:40.255502  359214 cri.go:89] found id: ""
	I1213 10:38:40.255516  359214 logs.go:282] 0 containers: []
	W1213 10:38:40.255524  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:40.255529  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:40.255590  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:40.279736  359214 cri.go:89] found id: ""
	I1213 10:38:40.279750  359214 logs.go:282] 0 containers: []
	W1213 10:38:40.279761  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:40.279766  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:40.279827  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:40.305162  359214 cri.go:89] found id: ""
	I1213 10:38:40.305186  359214 logs.go:282] 0 containers: []
	W1213 10:38:40.305194  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:40.305199  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:40.305268  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:40.330075  359214 cri.go:89] found id: ""
	I1213 10:38:40.330089  359214 logs.go:282] 0 containers: []
	W1213 10:38:40.330097  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:40.330103  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:40.330171  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:40.356608  359214 cri.go:89] found id: ""
	I1213 10:38:40.356623  359214 logs.go:282] 0 containers: []
	W1213 10:38:40.356631  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:40.356639  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:40.356649  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
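The "container status" gather uses a fallback chain: resolve the crictl path via command substitution and, if the crictl listing fails outright, fall back to the docker CLI. Written out plainly:

    # prefer crictl if installed, otherwise try docker
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a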
	I1213 10:38:40.386833  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:40.386850  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:40.442503  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:40.442523  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:40.458859  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:40.458875  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:40.526393  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:40.517849   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.518498   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.520192   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.520775   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.522583   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:38:40.526415  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:40.526425  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:43.093725  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:43.104280  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:43.104351  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:43.128552  359214 cri.go:89] found id: ""
	I1213 10:38:43.128566  359214 logs.go:282] 0 containers: []
	W1213 10:38:43.128574  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:43.128579  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:43.128637  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:43.153838  359214 cri.go:89] found id: ""
	I1213 10:38:43.153853  359214 logs.go:282] 0 containers: []
	W1213 10:38:43.153861  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:43.153866  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:43.153925  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:43.182604  359214 cri.go:89] found id: ""
	I1213 10:38:43.182617  359214 logs.go:282] 0 containers: []
	W1213 10:38:43.182624  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:43.182631  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:43.182751  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:43.212454  359214 cri.go:89] found id: ""
	I1213 10:38:43.212481  359214 logs.go:282] 0 containers: []
	W1213 10:38:43.212489  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:43.212501  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:43.212572  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:43.239973  359214 cri.go:89] found id: ""
	I1213 10:38:43.239987  359214 logs.go:282] 0 containers: []
	W1213 10:38:43.240005  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:43.240011  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:43.240074  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:43.264733  359214 cri.go:89] found id: ""
	I1213 10:38:43.264748  359214 logs.go:282] 0 containers: []
	W1213 10:38:43.264755  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:43.264767  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:43.264826  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:43.291333  359214 cri.go:89] found id: ""
	I1213 10:38:43.291347  359214 logs.go:282] 0 containers: []
	W1213 10:38:43.291354  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:43.291362  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:43.291372  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:43.348037  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:43.348057  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:43.364359  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:43.364377  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:43.426788  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:43.418519   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.419245   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.420917   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.421479   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.423061   11215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:38:43.426809  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:43.426819  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:43.492237  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:43.492258  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:46.019179  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:46.029376  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:46.029454  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:46.053215  359214 cri.go:89] found id: ""
	I1213 10:38:46.053229  359214 logs.go:282] 0 containers: []
	W1213 10:38:46.053236  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:46.053242  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:46.053315  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:46.078867  359214 cri.go:89] found id: ""
	I1213 10:38:46.078882  359214 logs.go:282] 0 containers: []
	W1213 10:38:46.078889  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:46.078895  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:46.078955  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:46.104476  359214 cri.go:89] found id: ""
	I1213 10:38:46.104490  359214 logs.go:282] 0 containers: []
	W1213 10:38:46.104498  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:46.104503  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:46.104584  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:46.132735  359214 cri.go:89] found id: ""
	I1213 10:38:46.132750  359214 logs.go:282] 0 containers: []
	W1213 10:38:46.132758  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:46.132763  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:46.132844  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:46.171837  359214 cri.go:89] found id: ""
	I1213 10:38:46.171852  359214 logs.go:282] 0 containers: []
	W1213 10:38:46.171859  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:46.171865  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:46.171925  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:46.214470  359214 cri.go:89] found id: ""
	I1213 10:38:46.214484  359214 logs.go:282] 0 containers: []
	W1213 10:38:46.214501  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:46.214508  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:46.214581  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:46.241616  359214 cri.go:89] found id: ""
	I1213 10:38:46.241631  359214 logs.go:282] 0 containers: []
	W1213 10:38:46.241638  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:46.241646  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:46.241657  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:46.269691  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:46.269717  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:46.326434  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:46.326454  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:46.342808  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:46.342825  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:46.406446  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:46.398462   11336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.399218   11336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.400888   11336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.401204   11336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.402682   11336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:38:46.406456  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:46.406466  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:48.970215  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:48.980360  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:48.980424  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:49.007836  359214 cri.go:89] found id: ""
	I1213 10:38:49.007857  359214 logs.go:282] 0 containers: []
	W1213 10:38:49.007865  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:49.007870  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:49.007930  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:49.032102  359214 cri.go:89] found id: ""
	I1213 10:38:49.032116  359214 logs.go:282] 0 containers: []
	W1213 10:38:49.032124  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:49.032129  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:49.032188  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:49.056548  359214 cri.go:89] found id: ""
	I1213 10:38:49.056562  359214 logs.go:282] 0 containers: []
	W1213 10:38:49.056577  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:49.056582  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:49.056638  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:49.080172  359214 cri.go:89] found id: ""
	I1213 10:38:49.080186  359214 logs.go:282] 0 containers: []
	W1213 10:38:49.080194  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:49.080199  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:49.080257  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:49.104358  359214 cri.go:89] found id: ""
	I1213 10:38:49.104372  359214 logs.go:282] 0 containers: []
	W1213 10:38:49.104380  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:49.104385  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:49.104456  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:49.131026  359214 cri.go:89] found id: ""
	I1213 10:38:49.131041  359214 logs.go:282] 0 containers: []
	W1213 10:38:49.131048  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:49.131054  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:49.131111  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:49.155850  359214 cri.go:89] found id: ""
	I1213 10:38:49.155865  359214 logs.go:282] 0 containers: []
	W1213 10:38:49.155872  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:49.155881  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:49.155891  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:49.237398  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:49.228981   11424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.229481   11424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.231324   11424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.231926   11424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.233542   11424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:38:49.237409  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:49.237422  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:49.300000  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:49.300020  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:49.330957  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:49.330973  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:49.392815  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:49.392834  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:51.909143  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:51.919406  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:51.919465  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:51.948136  359214 cri.go:89] found id: ""
	I1213 10:38:51.948150  359214 logs.go:282] 0 containers: []
	W1213 10:38:51.948157  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:51.948163  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:51.948221  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:51.972396  359214 cri.go:89] found id: ""
	I1213 10:38:51.972411  359214 logs.go:282] 0 containers: []
	W1213 10:38:51.972420  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:51.972424  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:51.972497  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:52.003416  359214 cri.go:89] found id: ""
	I1213 10:38:52.003433  359214 logs.go:282] 0 containers: []
	W1213 10:38:52.003442  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:52.003449  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:52.003533  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:52.031359  359214 cri.go:89] found id: ""
	I1213 10:38:52.031374  359214 logs.go:282] 0 containers: []
	W1213 10:38:52.031382  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:52.031387  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:52.031447  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:52.056514  359214 cri.go:89] found id: ""
	I1213 10:38:52.056529  359214 logs.go:282] 0 containers: []
	W1213 10:38:52.056536  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:52.056541  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:52.056619  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:52.085509  359214 cri.go:89] found id: ""
	I1213 10:38:52.085524  359214 logs.go:282] 0 containers: []
	W1213 10:38:52.085533  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:52.085539  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:52.085613  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:52.113117  359214 cri.go:89] found id: ""
	I1213 10:38:52.113131  359214 logs.go:282] 0 containers: []
	W1213 10:38:52.113138  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:52.113146  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:52.113157  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:52.129605  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:52.129627  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:52.198531  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:52.190917   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.191383   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.192873   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.193169   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.194579   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:38:52.198542  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:52.198554  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:52.267617  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:52.267640  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:52.301362  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:52.301379  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:54.858319  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:54.868860  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:54.868931  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:54.895935  359214 cri.go:89] found id: ""
	I1213 10:38:54.895949  359214 logs.go:282] 0 containers: []
	W1213 10:38:54.895956  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:54.895962  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:54.896020  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:54.924712  359214 cri.go:89] found id: ""
	I1213 10:38:54.924727  359214 logs.go:282] 0 containers: []
	W1213 10:38:54.924734  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:54.924740  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:54.924807  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:54.949662  359214 cri.go:89] found id: ""
	I1213 10:38:54.949677  359214 logs.go:282] 0 containers: []
	W1213 10:38:54.949685  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:54.949690  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:54.949758  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:54.973861  359214 cri.go:89] found id: ""
	I1213 10:38:54.973876  359214 logs.go:282] 0 containers: []
	W1213 10:38:54.973883  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:54.973889  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:54.973949  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:54.999167  359214 cri.go:89] found id: ""
	I1213 10:38:54.999182  359214 logs.go:282] 0 containers: []
	W1213 10:38:54.999190  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:54.999196  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:54.999267  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:55.030614  359214 cri.go:89] found id: ""
	I1213 10:38:55.030630  359214 logs.go:282] 0 containers: []
	W1213 10:38:55.030638  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:55.030644  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:55.030764  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:55.059903  359214 cri.go:89] found id: ""
	I1213 10:38:55.059918  359214 logs.go:282] 0 containers: []
	W1213 10:38:55.059925  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:55.059933  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:55.059943  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:55.129097  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:55.129156  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:55.157699  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:55.157717  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:55.226688  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:55.226706  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:55.244093  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:55.244111  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:55.309464  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:55.300977   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.301803   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.303423   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.304086   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.305672   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
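The timestamps show the whole cycle (pgrep for a kube-apiserver process, then the per-component probes and log gathers) repeating roughly every three seconds until the start timeout expires. A hypothetical sketch of that wait loop:

    # poll for an apiserver process the way the log above does, ~3s apart
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3
    done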
	I1213 10:38:57.809736  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:57.819959  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:57.820025  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:57.844184  359214 cri.go:89] found id: ""
	I1213 10:38:57.844198  359214 logs.go:282] 0 containers: []
	W1213 10:38:57.844206  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:57.844211  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:38:57.844270  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:57.869511  359214 cri.go:89] found id: ""
	I1213 10:38:57.869524  359214 logs.go:282] 0 containers: []
	W1213 10:38:57.869532  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:38:57.869553  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:38:57.869613  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:57.895212  359214 cri.go:89] found id: ""
	I1213 10:38:57.895226  359214 logs.go:282] 0 containers: []
	W1213 10:38:57.895234  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:38:57.895239  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:57.895298  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:57.919989  359214 cri.go:89] found id: ""
	I1213 10:38:57.920004  359214 logs.go:282] 0 containers: []
	W1213 10:38:57.920011  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:57.920018  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:57.920076  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:57.948250  359214 cri.go:89] found id: ""
	I1213 10:38:57.948263  359214 logs.go:282] 0 containers: []
	W1213 10:38:57.948271  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:57.948277  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:57.948334  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:57.974322  359214 cri.go:89] found id: ""
	I1213 10:38:57.974337  359214 logs.go:282] 0 containers: []
	W1213 10:38:57.974345  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:57.974350  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:57.974423  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:58.005721  359214 cri.go:89] found id: ""
	I1213 10:38:58.005737  359214 logs.go:282] 0 containers: []
	W1213 10:38:58.005747  359214 logs.go:284] No container was found matching "kindnet"
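	Each polling round repeats the same seven crictl queries, one per expected control-plane component. Condensed into a loop, this is roughly (a sketch of what the log shows, not minikube's actual code):

	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet; do
	      printf '%s: ' "$c"
	      sudo crictl ps -a --quiet --name="$c" | wc -l
	    done

	All seven queries returning an empty ID list is what produces the paired found id: "" / No container was found lines, and it suggests the CRI endpoint answers but holds no Kubernetes containers at all, not merely an unhealthy apiserver.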
	I1213 10:38:58.005757  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:58.005768  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:58.064186  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:58.064207  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
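	For the kernel log, the dmesg flags above mean: -P disables the pager, -H prints human-readable timestamps, -L=never suppresses color escape codes so the output is safe to capture, and --level restricts output to warning severity and above. The equivalent standalone invocation:

	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400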
	I1213 10:38:58.080907  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:58.080924  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:58.146147  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:58.137210   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.137944   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.139692   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.140402   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.141981   11739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
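	Note which kubectl the describe-nodes probe uses: the version-matched binary minikube keeps inside the node under /var/lib/minikube/binaries/v1.35.0-beta.0/, pointed at the node-local admin kubeconfig, so the probe is independent of whatever kubectl and kubeconfig the host has. The same probe can be reproduced from the host while the profile is up:

	    minikube -p functional-652709 ssh -- \
	      sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig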
	I1213 10:38:58.146159  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:38:58.146170  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:38:58.214235  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:38:58.214253  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
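	The container status gather is deliberately defensive: the backticks substitute the full path to crictl when one is found (falling back to the bare name so the failure is still legible), and if the whole crictl listing fails it falls back to docker. With modern command substitution and quoting, the same line reads:

	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a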
	I1213 10:39:00.744729  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:00.755028  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:00.755086  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:00.780193  359214 cri.go:89] found id: ""
	I1213 10:39:00.780207  359214 logs.go:282] 0 containers: []
	W1213 10:39:00.780215  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:00.780221  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:00.780293  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:00.806094  359214 cri.go:89] found id: ""
	I1213 10:39:00.806109  359214 logs.go:282] 0 containers: []
	W1213 10:39:00.806116  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:00.806123  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:00.806190  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:00.830215  359214 cri.go:89] found id: ""
	I1213 10:39:00.830229  359214 logs.go:282] 0 containers: []
	W1213 10:39:00.830236  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:00.830241  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:00.830298  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:00.858553  359214 cri.go:89] found id: ""
	I1213 10:39:00.858567  359214 logs.go:282] 0 containers: []
	W1213 10:39:00.858575  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:00.858581  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:00.858638  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:00.883276  359214 cri.go:89] found id: ""
	I1213 10:39:00.883290  359214 logs.go:282] 0 containers: []
	W1213 10:39:00.883298  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:00.883304  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:00.883366  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:00.908199  359214 cri.go:89] found id: ""
	I1213 10:39:00.908214  359214 logs.go:282] 0 containers: []
	W1213 10:39:00.908222  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:00.908235  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:00.908292  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:00.933487  359214 cri.go:89] found id: ""
	I1213 10:39:00.933502  359214 logs.go:282] 0 containers: []
	W1213 10:39:00.933510  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:00.933518  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:00.933529  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:00.999819  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:00.990764   11836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:00.991604   11836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:00.993277   11836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:00.993599   11836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:00.995238   11836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:00.999831  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:00.999851  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:01.070347  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:01.070376  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:01.099348  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:01.099367  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:01.160766  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:01.160789  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:03.683134  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:03.693419  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:03.693479  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:03.724358  359214 cri.go:89] found id: ""
	I1213 10:39:03.724373  359214 logs.go:282] 0 containers: []
	W1213 10:39:03.724380  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:03.724386  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:03.724446  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:03.749342  359214 cri.go:89] found id: ""
	I1213 10:39:03.749357  359214 logs.go:282] 0 containers: []
	W1213 10:39:03.749365  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:03.749370  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:03.749428  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:03.777066  359214 cri.go:89] found id: ""
	I1213 10:39:03.777081  359214 logs.go:282] 0 containers: []
	W1213 10:39:03.777088  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:03.777094  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:03.777153  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:03.802375  359214 cri.go:89] found id: ""
	I1213 10:39:03.802390  359214 logs.go:282] 0 containers: []
	W1213 10:39:03.802397  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:03.802405  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:03.802463  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:03.828597  359214 cri.go:89] found id: ""
	I1213 10:39:03.828613  359214 logs.go:282] 0 containers: []
	W1213 10:39:03.828620  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:03.828626  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:03.828688  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:03.854166  359214 cri.go:89] found id: ""
	I1213 10:39:03.854187  359214 logs.go:282] 0 containers: []
	W1213 10:39:03.854195  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:03.854201  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:03.854261  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:03.879516  359214 cri.go:89] found id: ""
	I1213 10:39:03.879533  359214 logs.go:282] 0 containers: []
	W1213 10:39:03.879540  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:03.879549  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:03.879559  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:03.936679  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:03.936700  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:03.953300  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:03.953317  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:04.029874  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:04.020037   11948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.021068   11948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.022008   11948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.023857   11948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.024567   11948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:04.029886  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:04.029896  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:04.097622  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:04.097643  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:06.630848  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:06.641568  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:06.641629  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:06.667996  359214 cri.go:89] found id: ""
	I1213 10:39:06.668011  359214 logs.go:282] 0 containers: []
	W1213 10:39:06.668019  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:06.668024  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:06.668090  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:06.697263  359214 cri.go:89] found id: ""
	I1213 10:39:06.697278  359214 logs.go:282] 0 containers: []
	W1213 10:39:06.697293  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:06.697299  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:06.697359  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:06.722757  359214 cri.go:89] found id: ""
	I1213 10:39:06.722772  359214 logs.go:282] 0 containers: []
	W1213 10:39:06.722780  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:06.722785  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:06.722844  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:06.746758  359214 cri.go:89] found id: ""
	I1213 10:39:06.746772  359214 logs.go:282] 0 containers: []
	W1213 10:39:06.746780  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:06.746786  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:06.746845  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:06.775078  359214 cri.go:89] found id: ""
	I1213 10:39:06.775093  359214 logs.go:282] 0 containers: []
	W1213 10:39:06.775100  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:06.775105  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:06.775164  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:06.800898  359214 cri.go:89] found id: ""
	I1213 10:39:06.800914  359214 logs.go:282] 0 containers: []
	W1213 10:39:06.800921  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:06.800926  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:06.800983  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:06.829594  359214 cri.go:89] found id: ""
	I1213 10:39:06.829624  359214 logs.go:282] 0 containers: []
	W1213 10:39:06.829648  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:06.829656  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:06.829666  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:06.893293  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:06.893314  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:06.921544  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:06.921562  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:06.981949  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:06.981969  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:06.998794  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:06.998816  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:07.067966  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:07.059691   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.060374   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.061914   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.062229   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.063682   12061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:09.568245  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:09.578515  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:09.578574  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:09.604486  359214 cri.go:89] found id: ""
	I1213 10:39:09.604500  359214 logs.go:282] 0 containers: []
	W1213 10:39:09.604507  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:09.604512  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:09.604572  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:09.628878  359214 cri.go:89] found id: ""
	I1213 10:39:09.628894  359214 logs.go:282] 0 containers: []
	W1213 10:39:09.628902  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:09.628912  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:09.628971  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:09.654182  359214 cri.go:89] found id: ""
	I1213 10:39:09.654196  359214 logs.go:282] 0 containers: []
	W1213 10:39:09.654204  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:09.654209  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:09.654268  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:09.679850  359214 cri.go:89] found id: ""
	I1213 10:39:09.679864  359214 logs.go:282] 0 containers: []
	W1213 10:39:09.679871  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:09.679877  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:09.679937  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:09.708630  359214 cri.go:89] found id: ""
	I1213 10:39:09.708644  359214 logs.go:282] 0 containers: []
	W1213 10:39:09.708651  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:09.708657  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:09.708716  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:09.732554  359214 cri.go:89] found id: ""
	I1213 10:39:09.732568  359214 logs.go:282] 0 containers: []
	W1213 10:39:09.732575  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:09.732581  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:09.732642  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:09.757631  359214 cri.go:89] found id: ""
	I1213 10:39:09.757646  359214 logs.go:282] 0 containers: []
	W1213 10:39:09.757654  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:09.757663  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:09.757674  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:09.816181  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:09.816203  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:09.832514  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:09.832531  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:09.897359  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:09.888543   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:09.889254   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:09.891102   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:09.891693   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:09.893450   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:09.897369  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:09.897379  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:09.960943  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:09.960964  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:12.490984  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:12.501823  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:12.501893  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:12.532332  359214 cri.go:89] found id: ""
	I1213 10:39:12.532347  359214 logs.go:282] 0 containers: []
	W1213 10:39:12.532354  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:12.532359  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:12.532419  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:12.558457  359214 cri.go:89] found id: ""
	I1213 10:39:12.558471  359214 logs.go:282] 0 containers: []
	W1213 10:39:12.558479  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:12.558485  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:12.558545  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:12.585075  359214 cri.go:89] found id: ""
	I1213 10:39:12.585089  359214 logs.go:282] 0 containers: []
	W1213 10:39:12.585097  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:12.585102  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:12.585160  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:12.614401  359214 cri.go:89] found id: ""
	I1213 10:39:12.614415  359214 logs.go:282] 0 containers: []
	W1213 10:39:12.614422  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:12.614428  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:12.614486  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:12.639152  359214 cri.go:89] found id: ""
	I1213 10:39:12.639166  359214 logs.go:282] 0 containers: []
	W1213 10:39:12.639173  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:12.639179  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:12.639240  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:12.667593  359214 cri.go:89] found id: ""
	I1213 10:39:12.667607  359214 logs.go:282] 0 containers: []
	W1213 10:39:12.667614  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:12.667620  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:12.667681  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:12.691984  359214 cri.go:89] found id: ""
	I1213 10:39:12.691997  359214 logs.go:282] 0 containers: []
	W1213 10:39:12.692005  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:12.692013  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:12.692024  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:12.756546  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:12.748299   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:12.748690   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:12.750244   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:12.750570   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:12.752183   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:12.756556  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:12.756567  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:12.820864  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:12.820885  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:12.853253  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:12.853289  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:12.911659  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:12.911678  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:15.427988  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:15.439459  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:15.439523  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:15.476834  359214 cri.go:89] found id: ""
	I1213 10:39:15.476849  359214 logs.go:282] 0 containers: []
	W1213 10:39:15.476856  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:15.476862  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:15.476926  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:15.501586  359214 cri.go:89] found id: ""
	I1213 10:39:15.501601  359214 logs.go:282] 0 containers: []
	W1213 10:39:15.501609  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:15.501614  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:15.501675  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:15.526367  359214 cri.go:89] found id: ""
	I1213 10:39:15.526381  359214 logs.go:282] 0 containers: []
	W1213 10:39:15.526399  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:15.526406  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:15.526473  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:15.551126  359214 cri.go:89] found id: ""
	I1213 10:39:15.551141  359214 logs.go:282] 0 containers: []
	W1213 10:39:15.551148  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:15.551154  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:15.551209  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:15.576958  359214 cri.go:89] found id: ""
	I1213 10:39:15.576973  359214 logs.go:282] 0 containers: []
	W1213 10:39:15.576990  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:15.576996  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:15.577062  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:15.601287  359214 cri.go:89] found id: ""
	I1213 10:39:15.601300  359214 logs.go:282] 0 containers: []
	W1213 10:39:15.601308  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:15.601313  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:15.601371  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:15.628822  359214 cri.go:89] found id: ""
	I1213 10:39:15.628837  359214 logs.go:282] 0 containers: []
	W1213 10:39:15.628844  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:15.628852  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:15.628862  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:15.644985  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:15.645002  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:15.711548  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:15.703095   12361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:15.703681   12361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:15.705285   12361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:15.705963   12361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:15.707559   12361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:15.711559  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:15.711571  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:15.775011  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:15.775031  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:15.802522  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:15.802545  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:18.359921  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:18.369925  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:18.369992  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:18.393448  359214 cri.go:89] found id: ""
	I1213 10:39:18.393462  359214 logs.go:282] 0 containers: []
	W1213 10:39:18.393470  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:18.393476  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:18.393532  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:18.426863  359214 cri.go:89] found id: ""
	I1213 10:39:18.426876  359214 logs.go:282] 0 containers: []
	W1213 10:39:18.426884  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:18.426889  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:18.426946  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:18.472251  359214 cri.go:89] found id: ""
	I1213 10:39:18.472264  359214 logs.go:282] 0 containers: []
	W1213 10:39:18.472272  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:18.472277  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:18.472333  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:18.500412  359214 cri.go:89] found id: ""
	I1213 10:39:18.500427  359214 logs.go:282] 0 containers: []
	W1213 10:39:18.500434  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:18.500440  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:18.500500  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:18.524823  359214 cri.go:89] found id: ""
	I1213 10:39:18.524837  359214 logs.go:282] 0 containers: []
	W1213 10:39:18.524845  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:18.524850  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:18.524908  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:18.549332  359214 cri.go:89] found id: ""
	I1213 10:39:18.549346  359214 logs.go:282] 0 containers: []
	W1213 10:39:18.549354  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:18.549359  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:18.549417  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:18.577251  359214 cri.go:89] found id: ""
	I1213 10:39:18.577271  359214 logs.go:282] 0 containers: []
	W1213 10:39:18.577279  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:18.577287  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:18.577299  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:18.639510  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:18.639530  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:18.677762  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:18.677777  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:18.737061  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:18.737080  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:18.753422  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:18.753439  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:18.823128  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:18.814301   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:18.815633   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:18.816172   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:18.817539   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:18.818059   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
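	The timestamps give the cadence of this wait loop: one full probe-and-gather round roughly every three seconds (10:38:55, :57, 10:39:00, :03, :06, :09, :12, :15, :18, :21) until the overall start timeout expires. A rough host-side way to watch for the port to come up while such a loop runs (a sketch, assuming minikube ssh propagates the remote exit status):

	    until minikube -p functional-652709 ssh -- "sudo ss -ltn | grep -q :8441"; do
	      sleep 3
	    done
	    echo "something is listening on 8441"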
	I1213 10:39:21.323418  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:21.333772  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:21.333833  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:21.368103  359214 cri.go:89] found id: ""
	I1213 10:39:21.368118  359214 logs.go:282] 0 containers: []
	W1213 10:39:21.368125  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:21.368131  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:21.368188  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:21.392848  359214 cri.go:89] found id: ""
	I1213 10:39:21.392862  359214 logs.go:282] 0 containers: []
	W1213 10:39:21.392870  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:21.392875  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:21.392932  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:21.426067  359214 cri.go:89] found id: ""
	I1213 10:39:21.426082  359214 logs.go:282] 0 containers: []
	W1213 10:39:21.426089  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:21.426094  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:21.426153  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:21.453497  359214 cri.go:89] found id: ""
	I1213 10:39:21.453521  359214 logs.go:282] 0 containers: []
	W1213 10:39:21.453529  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:21.453535  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:21.453600  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:21.486155  359214 cri.go:89] found id: ""
	I1213 10:39:21.486170  359214 logs.go:282] 0 containers: []
	W1213 10:39:21.486187  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:21.486193  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:21.486262  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:21.512133  359214 cri.go:89] found id: ""
	I1213 10:39:21.512148  359214 logs.go:282] 0 containers: []
	W1213 10:39:21.512155  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:21.512161  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:21.512219  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:21.536909  359214 cri.go:89] found id: ""
	I1213 10:39:21.536925  359214 logs.go:282] 0 containers: []
	W1213 10:39:21.536932  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:21.536940  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:21.536951  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:21.564635  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:21.564651  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:21.621861  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:21.621882  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:21.638280  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:21.638297  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:21.706649  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:21.698160   12585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.698774   12585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.700554   12585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.701257   12585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.702523   12585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:21.698160   12585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.698774   12585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.700554   12585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.701257   12585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.702523   12585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:21.706660  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:21.706678  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
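	Each retry cycle gathers the same five sources: kubelet, dmesg, describe nodes, containerd, and container status; only their order varies between cycles. The equivalent manual collection, using the exact commands the harness runs over SSH (a sketch; `$PROFILE` as assumed above):

	    # Pull the same diagnostics the harness collects on every cycle.
	    minikube -p "$PROFILE" ssh -- 'sudo journalctl -u kubelet -n 400'
	    minikube -p "$PROFILE" ssh -- 'sudo journalctl -u containerd -n 400'
	    minikube -p "$PROFILE" ssh -- 'sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400'
	    minikube -p "$PROFILE" ssh -- 'sudo crictl ps -a'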
	I1213 10:39:24.270851  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:24.281891  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:24.281959  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:24.306887  359214 cri.go:89] found id: ""
	I1213 10:39:24.306902  359214 logs.go:282] 0 containers: []
	W1213 10:39:24.306910  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:24.306916  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:24.306989  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:24.330995  359214 cri.go:89] found id: ""
	I1213 10:39:24.331009  359214 logs.go:282] 0 containers: []
	W1213 10:39:24.331018  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:24.331023  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:24.331079  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:24.358824  359214 cri.go:89] found id: ""
	I1213 10:39:24.358838  359214 logs.go:282] 0 containers: []
	W1213 10:39:24.358845  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:24.358850  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:24.358907  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:24.383545  359214 cri.go:89] found id: ""
	I1213 10:39:24.383559  359214 logs.go:282] 0 containers: []
	W1213 10:39:24.383566  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:24.383572  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:24.383628  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:24.407288  359214 cri.go:89] found id: ""
	I1213 10:39:24.407302  359214 logs.go:282] 0 containers: []
	W1213 10:39:24.407309  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:24.407315  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:24.407374  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:24.441689  359214 cri.go:89] found id: ""
	I1213 10:39:24.441703  359214 logs.go:282] 0 containers: []
	W1213 10:39:24.441720  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:24.441727  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:24.441796  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:24.469372  359214 cri.go:89] found id: ""
	I1213 10:39:24.469387  359214 logs.go:282] 0 containers: []
	W1213 10:39:24.469394  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:24.469402  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:24.469418  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:24.529071  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:24.529091  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:24.545770  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:24.545786  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:24.619385  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:24.610753   12679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.611526   12679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.613120   12679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.613552   12679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.615328   12679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:24.610753   12679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.611526   12679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.613120   12679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.613552   12679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.615328   12679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:24.619395  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:24.619406  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:24.683002  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:24.683029  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
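	`crictl ps -a --quiet --name=<component>` prints only matching container IDs, so an empty result is what surfaces above as `found id: ""` and `0 containers: []`. A one-pass check over every control-plane component the harness queries (a sketch, run inside the node):

	    # Empty output for a component means no container, running or exited.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="$c")
	      echo "$c: ${ids:-none}"
	    done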
	I1213 10:39:27.214048  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:27.223825  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:27.223885  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:27.249091  359214 cri.go:89] found id: ""
	I1213 10:39:27.249106  359214 logs.go:282] 0 containers: []
	W1213 10:39:27.249114  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:27.249120  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:27.249175  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:27.274216  359214 cri.go:89] found id: ""
	I1213 10:39:27.274231  359214 logs.go:282] 0 containers: []
	W1213 10:39:27.274238  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:27.274243  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:27.274301  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:27.306051  359214 cri.go:89] found id: ""
	I1213 10:39:27.306068  359214 logs.go:282] 0 containers: []
	W1213 10:39:27.306076  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:27.306081  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:27.306162  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:27.329993  359214 cri.go:89] found id: ""
	I1213 10:39:27.330015  359214 logs.go:282] 0 containers: []
	W1213 10:39:27.330022  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:27.330027  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:27.330084  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:27.357738  359214 cri.go:89] found id: ""
	I1213 10:39:27.357759  359214 logs.go:282] 0 containers: []
	W1213 10:39:27.357766  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:27.357772  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:27.357829  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:27.383932  359214 cri.go:89] found id: ""
	I1213 10:39:27.383948  359214 logs.go:282] 0 containers: []
	W1213 10:39:27.383955  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:27.383960  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:27.384021  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:27.408273  359214 cri.go:89] found id: ""
	I1213 10:39:27.408298  359214 logs.go:282] 0 containers: []
	W1213 10:39:27.408306  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:27.408314  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:27.408324  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:27.473400  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:27.473421  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:27.490562  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:27.490580  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:27.560540  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:27.551714   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.552445   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.554637   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.555366   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.556555   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:27.551714   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.552445   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.554637   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.555366   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.556555   12785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:27.560551  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:27.560562  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:27.623676  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:27.623700  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:30.153068  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:30.164672  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:30.164745  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:30.192223  359214 cri.go:89] found id: ""
	I1213 10:39:30.192239  359214 logs.go:282] 0 containers: []
	W1213 10:39:30.192248  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:30.192254  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:30.192336  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:30.224222  359214 cri.go:89] found id: ""
	I1213 10:39:30.224237  359214 logs.go:282] 0 containers: []
	W1213 10:39:30.224245  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:30.224251  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:30.224319  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:30.250132  359214 cri.go:89] found id: ""
	I1213 10:39:30.250148  359214 logs.go:282] 0 containers: []
	W1213 10:39:30.250156  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:30.250161  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:30.250232  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:30.278166  359214 cri.go:89] found id: ""
	I1213 10:39:30.278182  359214 logs.go:282] 0 containers: []
	W1213 10:39:30.278199  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:30.278205  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:30.278271  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:30.304028  359214 cri.go:89] found id: ""
	I1213 10:39:30.304043  359214 logs.go:282] 0 containers: []
	W1213 10:39:30.304050  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:30.304055  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:30.304112  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:30.328660  359214 cri.go:89] found id: ""
	I1213 10:39:30.328675  359214 logs.go:282] 0 containers: []
	W1213 10:39:30.328693  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:30.328699  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:30.328767  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:30.352850  359214 cri.go:89] found id: ""
	I1213 10:39:30.352865  359214 logs.go:282] 0 containers: []
	W1213 10:39:30.352877  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:30.352886  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:30.352896  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:30.408893  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:30.408912  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:30.428762  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:30.428779  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:30.500428  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:30.492113   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.492871   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.494609   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.495292   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.496285   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:30.492113   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.492871   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.494609   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.495292   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.496285   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:30.500438  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:30.500449  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:30.563541  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:30.563560  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:33.092955  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:33.103393  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:33.103457  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:33.128626  359214 cri.go:89] found id: ""
	I1213 10:39:33.128640  359214 logs.go:282] 0 containers: []
	W1213 10:39:33.128647  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:33.128653  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:33.128709  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:33.156533  359214 cri.go:89] found id: ""
	I1213 10:39:33.156548  359214 logs.go:282] 0 containers: []
	W1213 10:39:33.156555  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:33.156561  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:33.156631  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:33.181965  359214 cri.go:89] found id: ""
	I1213 10:39:33.181979  359214 logs.go:282] 0 containers: []
	W1213 10:39:33.181987  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:33.181992  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:33.182066  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:33.210753  359214 cri.go:89] found id: ""
	I1213 10:39:33.210767  359214 logs.go:282] 0 containers: []
	W1213 10:39:33.210775  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:33.210780  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:33.210846  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:33.236369  359214 cri.go:89] found id: ""
	I1213 10:39:33.236384  359214 logs.go:282] 0 containers: []
	W1213 10:39:33.236391  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:33.236396  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:33.236453  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:33.261374  359214 cri.go:89] found id: ""
	I1213 10:39:33.261390  359214 logs.go:282] 0 containers: []
	W1213 10:39:33.261397  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:33.261403  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:33.261476  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:33.286480  359214 cri.go:89] found id: ""
	I1213 10:39:33.286496  359214 logs.go:282] 0 containers: []
	W1213 10:39:33.286512  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:33.286536  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:33.286547  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:33.344247  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:33.344268  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:33.362163  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:33.362178  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:33.431331  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:33.423097   12996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.423938   12996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.425571   12996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.425890   12996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.427375   12996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:33.423097   12996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.423938   12996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.425571   12996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.425890   12996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.427375   12996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:33.431340  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:33.431351  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:33.514221  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:33.514250  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
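	The `sudo pgrep -xnf kube-apiserver.*minikube.*` probe that opens each cycle matches the full command line (`-f`) exactly (`-x`) and prints only the newest PID (`-n`); it exits with status 1 when no apiserver process exists, which is why every cycle here immediately falls through to CRI listing and log gathering. Reproduced by hand (sketch):

	    # Status 1 and no output means no apiserver process on the node.
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
	      || echo 'no kube-apiserver process; falling back to crictl/journalctl'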
	I1213 10:39:36.043055  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:36.053301  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:36.053366  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:36.078047  359214 cri.go:89] found id: ""
	I1213 10:39:36.078061  359214 logs.go:282] 0 containers: []
	W1213 10:39:36.078069  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:36.078074  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:36.078135  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:36.104994  359214 cri.go:89] found id: ""
	I1213 10:39:36.105009  359214 logs.go:282] 0 containers: []
	W1213 10:39:36.105017  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:36.105022  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:36.105083  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:36.138243  359214 cri.go:89] found id: ""
	I1213 10:39:36.138257  359214 logs.go:282] 0 containers: []
	W1213 10:39:36.138264  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:36.138270  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:36.138331  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:36.163657  359214 cri.go:89] found id: ""
	I1213 10:39:36.163672  359214 logs.go:282] 0 containers: []
	W1213 10:39:36.163679  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:36.163685  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:36.163744  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:36.192631  359214 cri.go:89] found id: ""
	I1213 10:39:36.192646  359214 logs.go:282] 0 containers: []
	W1213 10:39:36.192653  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:36.192658  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:36.192715  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:36.217613  359214 cri.go:89] found id: ""
	I1213 10:39:36.217626  359214 logs.go:282] 0 containers: []
	W1213 10:39:36.217634  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:36.217641  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:36.217699  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:36.242973  359214 cri.go:89] found id: ""
	I1213 10:39:36.242988  359214 logs.go:282] 0 containers: []
	W1213 10:39:36.242995  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:36.243004  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:36.243015  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:36.299822  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:36.299843  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:36.316930  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:36.316947  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:36.384839  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:36.376386   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.377339   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.379075   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.379670   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.381017   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:36.376386   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.377339   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.379075   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.379670   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.381017   13100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:36.384850  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:36.384860  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:36.453800  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:36.453820  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:38.992805  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:39.004323  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:39.004395  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:39.029542  359214 cri.go:89] found id: ""
	I1213 10:39:39.029556  359214 logs.go:282] 0 containers: []
	W1213 10:39:39.029564  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:39.029569  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:39.029634  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:39.058191  359214 cri.go:89] found id: ""
	I1213 10:39:39.058205  359214 logs.go:282] 0 containers: []
	W1213 10:39:39.058212  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:39.058217  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:39.058278  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:39.082506  359214 cri.go:89] found id: ""
	I1213 10:39:39.082520  359214 logs.go:282] 0 containers: []
	W1213 10:39:39.082527  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:39.082532  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:39.082588  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:39.107708  359214 cri.go:89] found id: ""
	I1213 10:39:39.107722  359214 logs.go:282] 0 containers: []
	W1213 10:39:39.107729  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:39.107735  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:39.107795  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:39.134092  359214 cri.go:89] found id: ""
	I1213 10:39:39.134106  359214 logs.go:282] 0 containers: []
	W1213 10:39:39.134114  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:39.134119  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:39.134176  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:39.159493  359214 cri.go:89] found id: ""
	I1213 10:39:39.159508  359214 logs.go:282] 0 containers: []
	W1213 10:39:39.159516  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:39.159521  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:39.159586  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:39.185250  359214 cri.go:89] found id: ""
	I1213 10:39:39.185270  359214 logs.go:282] 0 containers: []
	W1213 10:39:39.185278  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:39.185285  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:39.185296  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:39.212945  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:39.212964  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:39.270421  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:39.270441  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:39.287465  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:39.287483  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:39.353697  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:39.344780   13216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.345537   13216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.347125   13216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.347630   13216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.349255   13216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:39.344780   13216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.345537   13216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.347125   13216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.347630   13216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.349255   13216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:39.353707  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:39.353719  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:41.923052  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:41.933314  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:41.933380  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:41.957979  359214 cri.go:89] found id: ""
	I1213 10:39:41.957994  359214 logs.go:282] 0 containers: []
	W1213 10:39:41.958001  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:41.958006  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:41.958063  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:41.982504  359214 cri.go:89] found id: ""
	I1213 10:39:41.982519  359214 logs.go:282] 0 containers: []
	W1213 10:39:41.982527  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:41.982532  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:41.982594  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:42.034066  359214 cri.go:89] found id: ""
	I1213 10:39:42.034090  359214 logs.go:282] 0 containers: []
	W1213 10:39:42.034098  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:42.034103  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:42.034170  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:42.060660  359214 cri.go:89] found id: ""
	I1213 10:39:42.060675  359214 logs.go:282] 0 containers: []
	W1213 10:39:42.060682  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:42.060688  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:42.060760  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:42.089100  359214 cri.go:89] found id: ""
	I1213 10:39:42.089116  359214 logs.go:282] 0 containers: []
	W1213 10:39:42.089125  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:42.089131  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:42.089206  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:42.124357  359214 cri.go:89] found id: ""
	I1213 10:39:42.124373  359214 logs.go:282] 0 containers: []
	W1213 10:39:42.124382  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:42.124388  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:42.124457  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:42.154537  359214 cri.go:89] found id: ""
	I1213 10:39:42.154552  359214 logs.go:282] 0 containers: []
	W1213 10:39:42.154560  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:42.154568  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:42.154580  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:42.236098  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:42.226374   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.227371   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.229046   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.229696   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.231326   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:42.226374   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.227371   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.229046   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.229696   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.231326   13308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:42.236116  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:42.236128  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:39:42.301179  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:42.301201  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:42.331860  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:42.331876  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:42.389580  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:42.389599  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
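	The timestamps show the probe loop firing roughly every three seconds with no change in outcome. When the loop is stuck like this, following kubelet live is often the quickest way to see why the static apiserver pod never starts (a sketch; `$PROFILE` as assumed above):

	    # Watch kubelet as it retries the static control-plane pods.
	    minikube -p "$PROFILE" ssh -- 'sudo journalctl -u kubelet -f -n 50'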
	I1213 10:39:44.907943  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:44.917971  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:44.918030  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:44.944860  359214 cri.go:89] found id: ""
	I1213 10:39:44.944876  359214 logs.go:282] 0 containers: []
	W1213 10:39:44.944883  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:44.944889  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:39:44.944947  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:44.969171  359214 cri.go:89] found id: ""
	I1213 10:39:44.969185  359214 logs.go:282] 0 containers: []
	W1213 10:39:44.969192  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:39:44.969197  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:39:44.969274  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:44.993953  359214 cri.go:89] found id: ""
	I1213 10:39:44.993968  359214 logs.go:282] 0 containers: []
	W1213 10:39:44.993975  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:39:44.993980  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:44.994036  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:45.047270  359214 cri.go:89] found id: ""
	I1213 10:39:45.047286  359214 logs.go:282] 0 containers: []
	W1213 10:39:45.047295  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:45.047308  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:45.047383  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:45.081157  359214 cri.go:89] found id: ""
	I1213 10:39:45.081173  359214 logs.go:282] 0 containers: []
	W1213 10:39:45.081182  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:45.081189  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:45.081275  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:45.121621  359214 cri.go:89] found id: ""
	I1213 10:39:45.121638  359214 logs.go:282] 0 containers: []
	W1213 10:39:45.121646  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:45.121652  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:45.121723  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:45.178070  359214 cri.go:89] found id: ""
	I1213 10:39:45.178087  359214 logs.go:282] 0 containers: []
	W1213 10:39:45.178095  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:45.178105  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:39:45.178117  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:45.242653  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:45.242715  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:45.312989  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:45.313030  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:45.333875  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:45.333893  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:45.402702  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:45.394811   13430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:45.395310   13430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:45.396989   13430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:45.397342   13430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:45.398810   13430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:45.402713  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:39:45.402724  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
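	The probe cycle above first looks for a running kube-apiserver process, then asks crictl for each expected control-plane container. It can be reproduced by hand with the same commands; a sketch, again assuming the node container is named functional-652709:
	
	    # Reproduce minikube's control-plane probe by hand (sketch).
	    docker exec functional-652709 pgrep -xnf 'kube-apiserver.*minikube.*'
	    # An empty ID list per component matches the 'found id: ""' lines above.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet; do
	      docker exec functional-652709 crictl ps -a --quiet --name="$name"
	    done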
	[The probe cycle above repeats roughly every three seconds at 10:39:47, 10:39:50, 10:39:53, 10:39:56, 10:39:59, 10:40:02 and 10:40:05 with identical results: no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager or kindnet containers are found, and each "describe nodes" attempt fails with the same "The connection to the server localhost:8441 was refused" error. Only the kubectl process IDs change (13521, 13628, 13746, 13836, 13940, 14049, 14153); the seven duplicate iterations are elided here.]
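	The recurring "connection refused" on localhost:8441 can be cross-checked directly against the apiserver's health endpoint rather than through kubectl's discovery path; a sketch, assuming curl exists in the node image and that minikube created its usual functional-652709 kubeconfig context (both assumptions):
	
	    # Probe the apiserver port from inside the node (sketch).
	    docker exec functional-652709 curl -sk https://localhost:8441/readyz
	    # Or from the host, through the kubeconfig context minikube writes:
	    kubectl --context functional-652709 get --raw /readyz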
	I1213 10:40:08.731038  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:08.741608  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:08.741668  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:08.770775  359214 cri.go:89] found id: ""
	I1213 10:40:08.770798  359214 logs.go:282] 0 containers: []
	W1213 10:40:08.770806  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:08.770812  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:08.770880  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:08.795812  359214 cri.go:89] found id: ""
	I1213 10:40:08.795826  359214 logs.go:282] 0 containers: []
	W1213 10:40:08.795834  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:08.795839  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:08.795900  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:08.821389  359214 cri.go:89] found id: ""
	I1213 10:40:08.821405  359214 logs.go:282] 0 containers: []
	W1213 10:40:08.821415  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:08.821420  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:08.821484  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:08.847242  359214 cri.go:89] found id: ""
	I1213 10:40:08.847256  359214 logs.go:282] 0 containers: []
	W1213 10:40:08.847265  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:08.847271  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:08.847337  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:08.873913  359214 cri.go:89] found id: ""
	I1213 10:40:08.873927  359214 logs.go:282] 0 containers: []
	W1213 10:40:08.873935  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:08.873940  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:08.874003  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:08.898969  359214 cri.go:89] found id: ""
	I1213 10:40:08.898983  359214 logs.go:282] 0 containers: []
	W1213 10:40:08.898990  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:08.898997  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:08.899063  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:08.936984  359214 cri.go:89] found id: ""
	I1213 10:40:08.936999  359214 logs.go:282] 0 containers: []
	W1213 10:40:08.937006  359214 logs.go:284] No container was found matching "kindnet"
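
Each sweep above queries crictl for one control-plane component at a time ("crictl ps -a --quiet --name=<component>") against the containerd root /run/containerd/runc/k8s.io; an empty ID list for every component means none of the control-plane containers were ever created, which matches the connection-refused errors on the apiserver port. A rough Go equivalent of the sweep (a sketch; the component list is taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The components the log sweeps for; empty crictl output for all
	// of them indicates the control plane never came up.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+name).Output()
		id := strings.TrimSpace(string(out))
		if err != nil || id == "" {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %s\n", name, id)
	}
}
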
	I1213 10:40:08.937015  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:08.937026  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:09.003459  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:09.003483  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
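
The log-gathering steps tail the last 400 journal lines per systemd unit ("journalctl -u kubelet -n 400", and likewise for containerd) and filter dmesg down to warn-and-worse levels. A small Go helper sketching the journalctl part (a hypothetical helper, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
	"strconv"
)

// tailUnit returns the last n journal lines for a systemd unit,
// mirroring the "sudo journalctl -u <unit> -n 400" calls above.
func tailUnit(unit string, n int) (string, error) {
	out, err := exec.Command("sudo", "journalctl",
		"-u", unit, "-n", strconv.Itoa(n)).Output()
	return string(out), err
}

func main() {
	for _, unit := range []string{"kubelet", "containerd"} {
		logs, err := tailUnit(unit, 400)
		if err != nil {
			fmt.Printf("failed to read %s logs: %v\n", unit, err)
			continue
		}
		fmt.Printf("== %s: %d bytes of journal output ==\n", unit, len(logs))
	}
}
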
	I1213 10:40:09.022648  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:09.022673  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:09.089911  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:09.081728   14254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.082500   14254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.083990   14254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.084516   14254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.086022   14254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
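
The repeated memcache.go:265 lines appear to be kubectl's discovery client retrying its GET of /api before giving up; here kubectl is the version-pinned binary under /var/lib/minikube/binaries/v1.35.0-beta.0, run against /var/lib/minikube/kubeconfig. Every retry fails identically because nothing is listening on port 8441, which a bare TCP dial reproduces without involving kubectl at all (a minimal sketch, assuming the same localhost:8441 endpoint):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// A plain TCP dial against the apiserver port reproduces the
	// "connection refused" seen in every kubectl attempt above.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}
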
	I1213 10:40:09.089922  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:09.089934  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:09.152235  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:09.152255  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
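
Between diagnostic sweeps the runner re-probes for a live apiserver process with "sudo pgrep -xnf kube-apiserver.*minikube.*"; the timestamps in this section put the probes roughly three seconds apart. A sketch of such a poll loop (interval and timeout are inferred from the log, not taken from minikube's source):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Poll for a running kube-apiserver process, as the log does,
	// until one appears or the deadline passes.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		err := exec.Command("sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Run()
		if err == nil {
			fmt.Println("kube-apiserver is running")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
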
	I1213 10:40:11.681167  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:11.691399  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:11.691463  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:11.720896  359214 cri.go:89] found id: ""
	I1213 10:40:11.720910  359214 logs.go:282] 0 containers: []
	W1213 10:40:11.720918  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:11.720924  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:11.720987  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:11.746089  359214 cri.go:89] found id: ""
	I1213 10:40:11.746103  359214 logs.go:282] 0 containers: []
	W1213 10:40:11.746111  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:11.746117  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:11.746176  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:11.770642  359214 cri.go:89] found id: ""
	I1213 10:40:11.770657  359214 logs.go:282] 0 containers: []
	W1213 10:40:11.770664  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:11.770670  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:11.770759  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:11.798877  359214 cri.go:89] found id: ""
	I1213 10:40:11.798891  359214 logs.go:282] 0 containers: []
	W1213 10:40:11.798900  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:11.798905  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:11.798965  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:11.824512  359214 cri.go:89] found id: ""
	I1213 10:40:11.824526  359214 logs.go:282] 0 containers: []
	W1213 10:40:11.824534  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:11.824539  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:11.824596  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:11.849644  359214 cri.go:89] found id: ""
	I1213 10:40:11.849658  359214 logs.go:282] 0 containers: []
	W1213 10:40:11.849665  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:11.849671  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:11.849728  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:11.878171  359214 cri.go:89] found id: ""
	I1213 10:40:11.878185  359214 logs.go:282] 0 containers: []
	W1213 10:40:11.878192  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:11.878201  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:11.878213  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:11.942012  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:11.942033  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:11.973830  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:11.973849  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:12.038115  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:12.038135  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:12.055328  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:12.055345  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:12.122312  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:12.113825   14374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:12.114885   14374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:12.116494   14374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:12.116834   14374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:12.118378   14374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:40:14.622545  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:14.632872  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:14.632931  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:14.660285  359214 cri.go:89] found id: ""
	I1213 10:40:14.660300  359214 logs.go:282] 0 containers: []
	W1213 10:40:14.660308  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:14.660313  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:14.660370  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:14.686341  359214 cri.go:89] found id: ""
	I1213 10:40:14.686355  359214 logs.go:282] 0 containers: []
	W1213 10:40:14.686362  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:14.686368  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:14.686427  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:14.710306  359214 cri.go:89] found id: ""
	I1213 10:40:14.710321  359214 logs.go:282] 0 containers: []
	W1213 10:40:14.710328  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:14.710334  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:14.710392  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:14.736823  359214 cri.go:89] found id: ""
	I1213 10:40:14.736838  359214 logs.go:282] 0 containers: []
	W1213 10:40:14.736846  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:14.736851  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:14.736909  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:14.761623  359214 cri.go:89] found id: ""
	I1213 10:40:14.761638  359214 logs.go:282] 0 containers: []
	W1213 10:40:14.761645  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:14.761651  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:14.761710  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:14.786707  359214 cri.go:89] found id: ""
	I1213 10:40:14.786721  359214 logs.go:282] 0 containers: []
	W1213 10:40:14.786729  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:14.786734  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:14.786795  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:14.816346  359214 cri.go:89] found id: ""
	I1213 10:40:14.816361  359214 logs.go:282] 0 containers: []
	W1213 10:40:14.816368  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:14.816376  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:14.816386  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:14.877767  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:14.877786  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:14.914260  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:14.914277  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:14.980282  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:14.980303  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:14.996741  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:14.996760  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:15.099242  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:15.090567   14473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:15.091275   14473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:15.092910   14473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:15.093493   14473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:15.095098   14473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:40:17.600882  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:17.611377  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:17.611437  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:17.639825  359214 cri.go:89] found id: ""
	I1213 10:40:17.639840  359214 logs.go:282] 0 containers: []
	W1213 10:40:17.639847  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:17.639853  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:17.639912  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:17.664963  359214 cri.go:89] found id: ""
	I1213 10:40:17.664977  359214 logs.go:282] 0 containers: []
	W1213 10:40:17.664985  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:17.664990  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:17.665052  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:17.690137  359214 cri.go:89] found id: ""
	I1213 10:40:17.690152  359214 logs.go:282] 0 containers: []
	W1213 10:40:17.690159  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:17.690165  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:17.690230  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:17.715292  359214 cri.go:89] found id: ""
	I1213 10:40:17.715307  359214 logs.go:282] 0 containers: []
	W1213 10:40:17.715315  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:17.715320  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:17.715382  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:17.744729  359214 cri.go:89] found id: ""
	I1213 10:40:17.744743  359214 logs.go:282] 0 containers: []
	W1213 10:40:17.744750  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:17.744756  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:17.744815  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:17.772253  359214 cri.go:89] found id: ""
	I1213 10:40:17.772268  359214 logs.go:282] 0 containers: []
	W1213 10:40:17.772276  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:17.772282  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:17.772348  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:17.797214  359214 cri.go:89] found id: ""
	I1213 10:40:17.797229  359214 logs.go:282] 0 containers: []
	W1213 10:40:17.797237  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:17.797245  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:17.797255  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:17.852633  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:17.852653  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:17.869612  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:17.869633  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:17.936787  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:17.927568   14559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:17.928465   14559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:17.930186   14559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:17.930475   14559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:17.932615   14559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:40:17.936804  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:17.936815  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:18.005630  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:18.005656  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:20.537348  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:20.547703  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:20.547778  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:20.572977  359214 cri.go:89] found id: ""
	I1213 10:40:20.572991  359214 logs.go:282] 0 containers: []
	W1213 10:40:20.572998  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:20.573004  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:20.573062  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:20.602314  359214 cri.go:89] found id: ""
	I1213 10:40:20.602328  359214 logs.go:282] 0 containers: []
	W1213 10:40:20.602335  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:20.602341  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:20.602397  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:20.627655  359214 cri.go:89] found id: ""
	I1213 10:40:20.627669  359214 logs.go:282] 0 containers: []
	W1213 10:40:20.627686  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:20.627698  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:20.627767  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:20.655199  359214 cri.go:89] found id: ""
	I1213 10:40:20.655213  359214 logs.go:282] 0 containers: []
	W1213 10:40:20.655220  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:20.655226  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:20.655291  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:20.682083  359214 cri.go:89] found id: ""
	I1213 10:40:20.682107  359214 logs.go:282] 0 containers: []
	W1213 10:40:20.682115  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:20.682120  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:20.682189  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:20.707128  359214 cri.go:89] found id: ""
	I1213 10:40:20.707142  359214 logs.go:282] 0 containers: []
	W1213 10:40:20.707150  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:20.707155  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:20.707213  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:20.732071  359214 cri.go:89] found id: ""
	I1213 10:40:20.732087  359214 logs.go:282] 0 containers: []
	W1213 10:40:20.732094  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:20.732103  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:20.732112  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:20.797387  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:20.788274   14658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:20.789028   14658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:20.791053   14658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:20.791612   14658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:20.793250   14658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:40:20.797397  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:20.797410  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:20.859451  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:20.859471  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:20.892801  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:20.892820  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:20.958351  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:20.958371  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:23.480839  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:23.491926  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:23.491987  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:23.518294  359214 cri.go:89] found id: ""
	I1213 10:40:23.518309  359214 logs.go:282] 0 containers: []
	W1213 10:40:23.518317  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:23.518324  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:23.518385  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:23.545487  359214 cri.go:89] found id: ""
	I1213 10:40:23.545502  359214 logs.go:282] 0 containers: []
	W1213 10:40:23.545509  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:23.545514  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:23.545584  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:23.571990  359214 cri.go:89] found id: ""
	I1213 10:40:23.572004  359214 logs.go:282] 0 containers: []
	W1213 10:40:23.572012  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:23.572017  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:23.572080  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:23.599133  359214 cri.go:89] found id: ""
	I1213 10:40:23.599149  359214 logs.go:282] 0 containers: []
	W1213 10:40:23.599157  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:23.599163  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:23.599223  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:23.626203  359214 cri.go:89] found id: ""
	I1213 10:40:23.626217  359214 logs.go:282] 0 containers: []
	W1213 10:40:23.626225  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:23.626232  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:23.626296  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:23.653325  359214 cri.go:89] found id: ""
	I1213 10:40:23.653341  359214 logs.go:282] 0 containers: []
	W1213 10:40:23.653349  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:23.653354  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:23.653423  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:23.688100  359214 cri.go:89] found id: ""
	I1213 10:40:23.688115  359214 logs.go:282] 0 containers: []
	W1213 10:40:23.688123  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:23.688132  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:23.688141  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:23.750798  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:23.750818  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:23.781668  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:23.781685  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:23.839211  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:23.839231  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:23.856390  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:23.856414  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:23.924021  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:23.914017   14781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:23.914911   14781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:23.915850   14781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:23.917610   14781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:23.918368   14781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:40:26.424278  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:26.434304  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:26.434366  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:26.460634  359214 cri.go:89] found id: ""
	I1213 10:40:26.460649  359214 logs.go:282] 0 containers: []
	W1213 10:40:26.460657  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:26.460663  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:26.460723  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:26.485153  359214 cri.go:89] found id: ""
	I1213 10:40:26.485167  359214 logs.go:282] 0 containers: []
	W1213 10:40:26.485175  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:26.485180  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:26.485238  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:26.514602  359214 cri.go:89] found id: ""
	I1213 10:40:26.514617  359214 logs.go:282] 0 containers: []
	W1213 10:40:26.514624  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:26.514630  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:26.514715  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:26.539399  359214 cri.go:89] found id: ""
	I1213 10:40:26.539415  359214 logs.go:282] 0 containers: []
	W1213 10:40:26.539422  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:26.539427  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:26.539489  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:26.564066  359214 cri.go:89] found id: ""
	I1213 10:40:26.564081  359214 logs.go:282] 0 containers: []
	W1213 10:40:26.564088  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:26.564094  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:26.564158  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:26.595722  359214 cri.go:89] found id: ""
	I1213 10:40:26.595736  359214 logs.go:282] 0 containers: []
	W1213 10:40:26.595744  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:26.595749  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:26.595808  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:26.621852  359214 cri.go:89] found id: ""
	I1213 10:40:26.621867  359214 logs.go:282] 0 containers: []
	W1213 10:40:26.621875  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:26.621884  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:26.621894  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:26.678226  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:26.678245  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:26.694679  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:26.694762  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:26.760593  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:26.751702   14872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:26.752418   14872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:26.754240   14872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:26.754904   14872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:26.756624   14872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:40:26.760604  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:26.760615  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:26.826139  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:26.826161  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:29.354247  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:29.364778  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:29.364838  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:29.391976  359214 cri.go:89] found id: ""
	I1213 10:40:29.391992  359214 logs.go:282] 0 containers: []
	W1213 10:40:29.391999  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:29.392006  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:29.392065  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:29.420898  359214 cri.go:89] found id: ""
	I1213 10:40:29.420913  359214 logs.go:282] 0 containers: []
	W1213 10:40:29.420920  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:29.420926  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:29.420995  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:29.445579  359214 cri.go:89] found id: ""
	I1213 10:40:29.445593  359214 logs.go:282] 0 containers: []
	W1213 10:40:29.445601  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:29.445606  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:29.445669  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:29.470481  359214 cri.go:89] found id: ""
	I1213 10:40:29.470496  359214 logs.go:282] 0 containers: []
	W1213 10:40:29.470504  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:29.470510  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:29.470571  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:29.494582  359214 cri.go:89] found id: ""
	I1213 10:40:29.494597  359214 logs.go:282] 0 containers: []
	W1213 10:40:29.494605  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:29.494612  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:29.494672  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:29.520784  359214 cri.go:89] found id: ""
	I1213 10:40:29.520801  359214 logs.go:282] 0 containers: []
	W1213 10:40:29.520810  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:29.520816  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:29.520879  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:29.546369  359214 cri.go:89] found id: ""
	I1213 10:40:29.546383  359214 logs.go:282] 0 containers: []
	W1213 10:40:29.546390  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:29.546398  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:29.546410  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:29.607363  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:29.607383  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:29.641550  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:29.641568  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:29.700639  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:29.700662  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:29.717135  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:29.717152  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:29.786035  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:29.777828   14992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:29.778659   14992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:29.780297   14992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:29.780629   14992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:29.782173   14992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:40:32.286874  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:32.297433  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:32.297493  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:32.326086  359214 cri.go:89] found id: ""
	I1213 10:40:32.326102  359214 logs.go:282] 0 containers: []
	W1213 10:40:32.326109  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:32.326116  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:32.326172  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:32.359076  359214 cri.go:89] found id: ""
	I1213 10:40:32.359091  359214 logs.go:282] 0 containers: []
	W1213 10:40:32.359098  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:32.359104  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:32.359170  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:32.384522  359214 cri.go:89] found id: ""
	I1213 10:40:32.384536  359214 logs.go:282] 0 containers: []
	W1213 10:40:32.384544  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:32.384560  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:32.384659  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:32.410250  359214 cri.go:89] found id: ""
	I1213 10:40:32.410264  359214 logs.go:282] 0 containers: []
	W1213 10:40:32.410272  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:32.410285  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:32.410348  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:32.435630  359214 cri.go:89] found id: ""
	I1213 10:40:32.435644  359214 logs.go:282] 0 containers: []
	W1213 10:40:32.435651  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:32.435656  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:32.435714  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:32.463149  359214 cri.go:89] found id: ""
	I1213 10:40:32.463163  359214 logs.go:282] 0 containers: []
	W1213 10:40:32.463171  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:32.463176  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:32.463242  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:32.487678  359214 cri.go:89] found id: ""
	I1213 10:40:32.487692  359214 logs.go:282] 0 containers: []
	W1213 10:40:32.487700  359214 logs.go:284] No container was found matching "kindnet"
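	Each `cri.go:54` / `cri.go:89` pair above is one `sudo crictl ps -a --quiet --name=<component>` invocation whose empty output is recorded as "0 containers". A rough Go equivalent of that sweep over the control-plane components (a sketch; `listContainers` is an illustrative name, not minikube's actual helper):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers runs the same command the log shows
// ("sudo crictl ps -a --quiet --name=<name>") and splits the output into
// container IDs; an empty result corresponds to the
// `No container was found matching "<name>"` warnings above.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}
```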
	I1213 10:40:32.487707  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:32.487716  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:32.550022  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:32.550044  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:32.583548  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:32.583564  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:32.640719  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:32.640741  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:32.658578  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:32.658596  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:32.723797  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:32.714586   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:32.715311   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:32.716834   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:32.717289   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:32.719662   15093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
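	The "describe nodes" step simply execs the version-pinned kubectl binary against the node-local kubeconfig, so it can only succeed once the apiserver answers on port 8441. A stripped-down sketch of that invocation (binary and kubeconfig paths copied verbatim from the log; error handling simplified):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same command the log runs via ssh_runner: the kubectl binary that
	// minikube pins per Kubernetes version, pointed at the in-VM kubeconfig.
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// With no apiserver on localhost:8441 this exits with status 1,
		// matching the "failed describe nodes" warnings above.
		fmt.Printf("describe nodes failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("%s", out)
}
```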
	I1213 10:40:35.224914  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:35.236872  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:35.237012  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:35.268051  359214 cri.go:89] found id: ""
	I1213 10:40:35.268066  359214 logs.go:282] 0 containers: []
	W1213 10:40:35.268073  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:35.268080  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:35.268145  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:35.295044  359214 cri.go:89] found id: ""
	I1213 10:40:35.295059  359214 logs.go:282] 0 containers: []
	W1213 10:40:35.295068  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:35.295075  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:35.295135  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:35.325621  359214 cri.go:89] found id: ""
	I1213 10:40:35.325634  359214 logs.go:282] 0 containers: []
	W1213 10:40:35.325642  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:35.325647  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:35.325710  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:35.351145  359214 cri.go:89] found id: ""
	I1213 10:40:35.351160  359214 logs.go:282] 0 containers: []
	W1213 10:40:35.351168  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:35.351173  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:35.351232  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:35.376062  359214 cri.go:89] found id: ""
	I1213 10:40:35.376076  359214 logs.go:282] 0 containers: []
	W1213 10:40:35.376083  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:35.376089  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:35.376145  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:35.400598  359214 cri.go:89] found id: ""
	I1213 10:40:35.400612  359214 logs.go:282] 0 containers: []
	W1213 10:40:35.400619  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:35.400631  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:35.400688  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:35.425347  359214 cri.go:89] found id: ""
	I1213 10:40:35.425361  359214 logs.go:282] 0 containers: []
	W1213 10:40:35.425368  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:35.425376  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:35.425387  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:35.487139  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:35.487160  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:35.514527  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:35.514544  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:35.571469  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:35.571489  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:35.590017  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:35.590034  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:35.658284  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:35.648682   15200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:35.650020   15200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:35.650936   15200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:35.652639   15200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:35.653357   15200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
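	The `memcache.go:265` lines are kubectl's discovery client failing its GET to https://localhost:8441/api?timeout=32s before any real command can run. The same probe can be reproduced with a plain HTTPS client (a sketch; certificate verification is skipped here only because the cluster CA is not loaded in this illustration, whereas kubectl verifies against it):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 32 * time.Second,
		Transport: &http.Transport{
			// Illustration only: kubectl would verify the apiserver's
			// certificate against the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8441/api?timeout=32s")
	if err != nil {
		// With nothing listening, this fails with "connection refused",
		// exactly as in the stderr blocks above.
		fmt.Println("discovery probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver answered:", resp.Status)
}
```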
	I1213 10:40:38.158809  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:38.173580  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:38.173664  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:38.205099  359214 cri.go:89] found id: ""
	I1213 10:40:38.205115  359214 logs.go:282] 0 containers: []
	W1213 10:40:38.205122  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:38.205128  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:38.205185  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:38.230418  359214 cri.go:89] found id: ""
	I1213 10:40:38.230432  359214 logs.go:282] 0 containers: []
	W1213 10:40:38.230439  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:38.230445  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:38.230503  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:38.255657  359214 cri.go:89] found id: ""
	I1213 10:40:38.255671  359214 logs.go:282] 0 containers: []
	W1213 10:40:38.255679  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:38.255684  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:38.255743  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:38.284257  359214 cri.go:89] found id: ""
	I1213 10:40:38.284271  359214 logs.go:282] 0 containers: []
	W1213 10:40:38.284279  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:38.284285  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:38.284343  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:38.310187  359214 cri.go:89] found id: ""
	I1213 10:40:38.310202  359214 logs.go:282] 0 containers: []
	W1213 10:40:38.310209  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:38.310214  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:38.310272  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:38.334855  359214 cri.go:89] found id: ""
	I1213 10:40:38.334870  359214 logs.go:282] 0 containers: []
	W1213 10:40:38.334878  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:38.334883  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:38.334943  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:38.364073  359214 cri.go:89] found id: ""
	I1213 10:40:38.364087  359214 logs.go:282] 0 containers: []
	W1213 10:40:38.364095  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:38.364103  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:38.364114  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:38.380615  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:38.380633  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:38.445151  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:38.436629   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:38.437359   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:38.439007   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:38.439526   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:38.441205   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
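	Every cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`, a process-level check that precedes the container-level ones: -x matches the pattern against the whole string, -n picks the newest match, and -f matches against the full command line. pgrep exits non-zero when nothing matches, which is the silent outcome in each cycle here. A minimal equivalent (a sketch):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same flags as the log: -x whole-line match, -n newest, -f full command line.
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	if err != nil {
		// A non-zero pgrep exit means no matching process; the poller
		// above then falls through to the crictl container checks.
		fmt.Println("no kube-apiserver process found:", err)
		return
	}
	fmt.Println("kube-apiserver process is running")
}
```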
	I1213 10:40:38.445161  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:38.445171  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:38.508000  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:38.508024  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:38.536010  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:38.536028  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:41.097145  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:41.107492  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:41.107560  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:41.133151  359214 cri.go:89] found id: ""
	I1213 10:40:41.133165  359214 logs.go:282] 0 containers: []
	W1213 10:40:41.133173  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:41.133178  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:41.133239  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:41.158807  359214 cri.go:89] found id: ""
	I1213 10:40:41.158822  359214 logs.go:282] 0 containers: []
	W1213 10:40:41.158830  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:41.158835  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:41.158900  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:41.186344  359214 cri.go:89] found id: ""
	I1213 10:40:41.186358  359214 logs.go:282] 0 containers: []
	W1213 10:40:41.186366  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:41.186371  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:41.186432  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:41.212889  359214 cri.go:89] found id: ""
	I1213 10:40:41.212904  359214 logs.go:282] 0 containers: []
	W1213 10:40:41.212911  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:41.212917  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:41.212976  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:41.238414  359214 cri.go:89] found id: ""
	I1213 10:40:41.238429  359214 logs.go:282] 0 containers: []
	W1213 10:40:41.238437  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:41.238442  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:41.238509  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:41.265200  359214 cri.go:89] found id: ""
	I1213 10:40:41.265215  359214 logs.go:282] 0 containers: []
	W1213 10:40:41.265222  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:41.265228  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:41.265299  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:41.293447  359214 cri.go:89] found id: ""
	I1213 10:40:41.293465  359214 logs.go:282] 0 containers: []
	W1213 10:40:41.293473  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:41.293483  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:41.293539  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:41.357277  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:41.348095   15390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:41.348933   15390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:41.350453   15390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:41.350904   15390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:41.352722   15390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:40:41.357289  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:41.357299  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:41.419746  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:41.419767  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:41.447382  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:41.447400  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:41.502410  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:41.502430  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
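	With every component lookup coming back empty, the "Gathering logs for ..." steps fall back to host-level sources: systemd unit logs via journalctl and kernel messages via dmesg, with exactly the flags shown above. A condensed sketch of that pass (run over SSH in the real flow; executed locally here):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The same shell commands the log runs for each "Gathering logs for ..." step.
	sources := map[string]string{
		"kubelet":    `sudo journalctl -u kubelet -n 400`,
		"containerd": `sudo journalctl -u containerd -n 400`,
		"dmesg":      `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
	}
	for name, cmd := range sources {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", name, err)
			continue
		}
		fmt.Printf("=== %s ===\n%s\n", name, out)
	}
}
```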
	I1213 10:40:44.019462  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:44.030131  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:44.030195  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:44.063076  359214 cri.go:89] found id: ""
	I1213 10:40:44.063093  359214 logs.go:282] 0 containers: []
	W1213 10:40:44.063102  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:44.063107  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:44.063171  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:44.087990  359214 cri.go:89] found id: ""
	I1213 10:40:44.088005  359214 logs.go:282] 0 containers: []
	W1213 10:40:44.088012  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:44.088017  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:44.088077  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:44.116967  359214 cri.go:89] found id: ""
	I1213 10:40:44.116982  359214 logs.go:282] 0 containers: []
	W1213 10:40:44.117000  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:44.117006  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:44.117075  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:44.144381  359214 cri.go:89] found id: ""
	I1213 10:40:44.144395  359214 logs.go:282] 0 containers: []
	W1213 10:40:44.144403  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:44.144414  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:44.144475  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:44.176265  359214 cri.go:89] found id: ""
	I1213 10:40:44.176279  359214 logs.go:282] 0 containers: []
	W1213 10:40:44.176286  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:44.176291  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:44.176349  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:44.204075  359214 cri.go:89] found id: ""
	I1213 10:40:44.204090  359214 logs.go:282] 0 containers: []
	W1213 10:40:44.204097  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:44.204102  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:44.204159  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:44.235147  359214 cri.go:89] found id: ""
	I1213 10:40:44.235161  359214 logs.go:282] 0 containers: []
	W1213 10:40:44.235169  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:44.235177  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:44.235187  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:44.290923  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:44.290942  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:44.307381  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:44.307398  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:44.371069  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:44.362628   15505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:44.363314   15505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:44.365045   15505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:44.365643   15505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:44.367260   15505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:40:44.371080  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:44.371092  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:44.432736  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:44.432757  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
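	The "container status" step is deliberately runtime-agnostic: it resolves crictl via `which` (falling back to the bare name) and, if that listing fails, tries `docker ps -a` instead. The same fallback chain, written out as a sketch:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the shell line in the log: prefer crictl (resolved via `which`),
	// and fall back to docker if the crictl listing fails.
	script := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	if err != nil {
		fmt.Printf("container status unavailable: %v\n", err)
		return
	}
	fmt.Printf("%s", out)
}
```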
	I1213 10:40:46.966048  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:46.976554  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:46.976616  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:47.009823  359214 cri.go:89] found id: ""
	I1213 10:40:47.009837  359214 logs.go:282] 0 containers: []
	W1213 10:40:47.009845  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:47.009850  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:47.009912  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:47.035213  359214 cri.go:89] found id: ""
	I1213 10:40:47.035227  359214 logs.go:282] 0 containers: []
	W1213 10:40:47.035234  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:47.035239  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:47.035300  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:47.060442  359214 cri.go:89] found id: ""
	I1213 10:40:47.060457  359214 logs.go:282] 0 containers: []
	W1213 10:40:47.060465  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:47.060470  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:47.060527  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:47.084361  359214 cri.go:89] found id: ""
	I1213 10:40:47.084375  359214 logs.go:282] 0 containers: []
	W1213 10:40:47.084383  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:47.084389  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:47.084453  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:47.109828  359214 cri.go:89] found id: ""
	I1213 10:40:47.109843  359214 logs.go:282] 0 containers: []
	W1213 10:40:47.109850  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:47.109856  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:47.109920  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:47.138538  359214 cri.go:89] found id: ""
	I1213 10:40:47.138553  359214 logs.go:282] 0 containers: []
	W1213 10:40:47.138561  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:47.138566  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:47.138623  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:47.173086  359214 cri.go:89] found id: ""
	I1213 10:40:47.173101  359214 logs.go:282] 0 containers: []
	W1213 10:40:47.173108  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:47.173116  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:47.173125  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:47.230267  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:47.230285  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:47.247567  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:47.247584  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:47.313118  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:47.305055   15610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:47.305868   15610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:47.307513   15610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:47.307952   15610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:47.309445   15610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:40:47.313128  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:47.313140  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:47.379486  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:47.379507  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:49.911610  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:49.921678  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:49.921738  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:49.945802  359214 cri.go:89] found id: ""
	I1213 10:40:49.945815  359214 logs.go:282] 0 containers: []
	W1213 10:40:49.945823  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:49.945828  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:49.945884  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:49.972021  359214 cri.go:89] found id: ""
	I1213 10:40:49.972036  359214 logs.go:282] 0 containers: []
	W1213 10:40:49.972043  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:49.972048  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:49.972104  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:49.995832  359214 cri.go:89] found id: ""
	I1213 10:40:49.995847  359214 logs.go:282] 0 containers: []
	W1213 10:40:49.995854  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:49.995859  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:49.995917  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:50.025400  359214 cri.go:89] found id: ""
	I1213 10:40:50.025416  359214 logs.go:282] 0 containers: []
	W1213 10:40:50.025424  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:50.025430  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:50.025488  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:50.052197  359214 cri.go:89] found id: ""
	I1213 10:40:50.052213  359214 logs.go:282] 0 containers: []
	W1213 10:40:50.052222  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:50.052229  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:50.052290  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:50.079760  359214 cri.go:89] found id: ""
	I1213 10:40:50.079774  359214 logs.go:282] 0 containers: []
	W1213 10:40:50.079782  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:50.079788  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:50.079849  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:50.109349  359214 cri.go:89] found id: ""
	I1213 10:40:50.109364  359214 logs.go:282] 0 containers: []
	W1213 10:40:50.109372  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:50.109380  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:50.109390  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:50.165908  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:50.165929  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:50.184199  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:50.184216  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:50.252767  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:50.244722   15716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:50.245526   15716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:50.247105   15716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:50.247464   15716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:50.248997   15716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
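	Taken together, each ~3-second cycle above is one iteration of a wait loop whose condition is "an apiserver process or container exists" and whose failure path is the diagnostic dump. A speculative sketch of that loop shape with a plain ticker (the cadence and both checks are taken from the log; the loop structure itself is an assumption, not minikube's actual code):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// apiserverPresent reproduces the two checks each cycle starts with:
// a pgrep for the process, then a crictl listing for the container.
func apiserverPresent() bool {
	if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
		return true
	}
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
	return err == nil && strings.TrimSpace(string(out)) != ""
}

func main() {
	ticker := time.NewTicker(3 * time.Second) // matches the cadence in the log
	defer ticker.Stop()
	timeout := time.After(6 * time.Minute)
	for {
		select {
		case <-ticker.C:
			if apiserverPresent() {
				fmt.Println("kube-apiserver is up")
				return
			}
			fmt.Println("kube-apiserver still absent; would gather diagnostics here")
		case <-timeout:
			fmt.Println("timed out waiting for kube-apiserver")
			return
		}
	}
}
```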
	I1213 10:40:50.252777  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:50.252790  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:50.314222  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:50.314241  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:52.842532  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:52.853108  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:52.853184  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:52.880391  359214 cri.go:89] found id: ""
	I1213 10:40:52.880412  359214 logs.go:282] 0 containers: []
	W1213 10:40:52.880420  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:52.880426  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:52.880487  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:52.905175  359214 cri.go:89] found id: ""
	I1213 10:40:52.905189  359214 logs.go:282] 0 containers: []
	W1213 10:40:52.905197  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:52.905202  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:52.905279  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:52.934872  359214 cri.go:89] found id: ""
	I1213 10:40:52.934887  359214 logs.go:282] 0 containers: []
	W1213 10:40:52.934894  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:52.934900  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:52.934956  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:52.960307  359214 cri.go:89] found id: ""
	I1213 10:40:52.960321  359214 logs.go:282] 0 containers: []
	W1213 10:40:52.960329  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:52.960334  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:52.960390  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:52.985363  359214 cri.go:89] found id: ""
	I1213 10:40:52.985377  359214 logs.go:282] 0 containers: []
	W1213 10:40:52.985385  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:52.985390  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:52.985453  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:53.011565  359214 cri.go:89] found id: ""
	I1213 10:40:53.011581  359214 logs.go:282] 0 containers: []
	W1213 10:40:53.011589  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:53.011594  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:53.011657  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:53.036397  359214 cri.go:89] found id: ""
	I1213 10:40:53.036412  359214 logs.go:282] 0 containers: []
	W1213 10:40:53.036420  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:53.036428  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:53.036438  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:53.091583  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:53.091603  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:53.107990  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:53.108007  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:53.173876  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:53.164848   15816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:53.165601   15816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:53.167336   15816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:53.167976   15816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:53.169634   15816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:40:53.173886  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:53.173897  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:53.238989  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:53.239009  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:55.773075  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:55.783512  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:55.783574  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:55.807988  359214 cri.go:89] found id: ""
	I1213 10:40:55.808002  359214 logs.go:282] 0 containers: []
	W1213 10:40:55.808009  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:55.808014  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:55.808073  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:55.831609  359214 cri.go:89] found id: ""
	I1213 10:40:55.831624  359214 logs.go:282] 0 containers: []
	W1213 10:40:55.831632  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:55.831637  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:55.831696  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:55.856162  359214 cri.go:89] found id: ""
	I1213 10:40:55.856177  359214 logs.go:282] 0 containers: []
	W1213 10:40:55.856184  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:55.856190  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:55.856247  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:55.883604  359214 cri.go:89] found id: ""
	I1213 10:40:55.883619  359214 logs.go:282] 0 containers: []
	W1213 10:40:55.883626  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:55.883631  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:55.883695  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:55.907679  359214 cri.go:89] found id: ""
	I1213 10:40:55.907694  359214 logs.go:282] 0 containers: []
	W1213 10:40:55.907701  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:55.907706  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:55.907764  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:55.932970  359214 cri.go:89] found id: ""
	I1213 10:40:55.932984  359214 logs.go:282] 0 containers: []
	W1213 10:40:55.932991  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:55.932996  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:55.933057  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:55.956837  359214 cri.go:89] found id: ""
	I1213 10:40:55.956851  359214 logs.go:282] 0 containers: []
	W1213 10:40:55.956858  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:55.956866  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:55.956877  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:56.030354  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:56.021163   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:56.021989   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:56.023979   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:56.024615   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:56.026271   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:56.021163   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:56.021989   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:56.023979   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:56.024615   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:56.026271   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:56.030364  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:56.030376  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:56.092205  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:56.092226  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:56.119616  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:56.119633  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:56.177084  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:56.177103  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:58.695794  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:58.706025  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:58.706086  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:58.729634  359214 cri.go:89] found id: ""
	I1213 10:40:58.729647  359214 logs.go:282] 0 containers: []
	W1213 10:40:58.729654  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:58.729659  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:40:58.729718  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:58.753786  359214 cri.go:89] found id: ""
	I1213 10:40:58.753800  359214 logs.go:282] 0 containers: []
	W1213 10:40:58.753808  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:40:58.753813  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:40:58.753874  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:58.778478  359214 cri.go:89] found id: ""
	I1213 10:40:58.778491  359214 logs.go:282] 0 containers: []
	W1213 10:40:58.778498  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:40:58.778503  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:58.778560  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:58.803243  359214 cri.go:89] found id: ""
	I1213 10:40:58.803258  359214 logs.go:282] 0 containers: []
	W1213 10:40:58.803274  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:58.803280  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:58.803342  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:58.827435  359214 cri.go:89] found id: ""
	I1213 10:40:58.827449  359214 logs.go:282] 0 containers: []
	W1213 10:40:58.827457  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:58.827462  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:58.827526  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:58.852612  359214 cri.go:89] found id: ""
	I1213 10:40:58.852627  359214 logs.go:282] 0 containers: []
	W1213 10:40:58.852635  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:58.852640  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:58.852702  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:58.879181  359214 cri.go:89] found id: ""
	I1213 10:40:58.879195  359214 logs.go:282] 0 containers: []
	W1213 10:40:58.879202  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:58.879210  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:40:58.879224  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:40:58.940146  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:40:58.940166  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:58.969086  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:58.969104  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:59.027812  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:59.027832  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:59.044161  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:59.044180  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:59.107958  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:59.099940   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:59.100731   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:59.102281   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:59.102588   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:59.104070   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:59.099940   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:59.100731   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:59.102281   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:59.102588   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:59.104070   16038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
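A minimal way to replay the probe loop above by hand is sketched below. It assumes shell access inside the minikube node (e.g., via `minikube ssh`, which is not part of the captured log); every command, flag, and path is taken verbatim from the transcript.

	# Check each control-plane container minikube probes; empty output corresponds
	# to the 'No container was found matching ...' warnings in the log above.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  echo "== $c =="
	  sudo crictl ps -a --quiet --name="$c"
	done
	# Gather the same logs minikube collects after a failed probe.
	sudo journalctl -u containerd -n 400
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	# Fails with 'connection refused' while no kube-apiserver container exists.
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig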
	I1213 10:41:01.608222  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:01.619072  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:01.619137  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:01.644559  359214 cri.go:89] found id: ""
	I1213 10:41:01.644574  359214 logs.go:282] 0 containers: []
	W1213 10:41:01.644582  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:01.644587  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:01.644690  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:01.673686  359214 cri.go:89] found id: ""
	I1213 10:41:01.673701  359214 logs.go:282] 0 containers: []
	W1213 10:41:01.673709  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:01.673714  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:01.673776  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:01.700231  359214 cri.go:89] found id: ""
	I1213 10:41:01.700246  359214 logs.go:282] 0 containers: []
	W1213 10:41:01.700253  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:01.700259  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:01.700317  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:01.729867  359214 cri.go:89] found id: ""
	I1213 10:41:01.729883  359214 logs.go:282] 0 containers: []
	W1213 10:41:01.729890  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:01.729895  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:01.729954  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:01.754275  359214 cri.go:89] found id: ""
	I1213 10:41:01.754289  359214 logs.go:282] 0 containers: []
	W1213 10:41:01.754297  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:01.754302  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:01.754362  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:01.780449  359214 cri.go:89] found id: ""
	I1213 10:41:01.780464  359214 logs.go:282] 0 containers: []
	W1213 10:41:01.780472  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:01.780477  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:01.780533  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:01.806614  359214 cri.go:89] found id: ""
	I1213 10:41:01.806638  359214 logs.go:282] 0 containers: []
	W1213 10:41:01.806646  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:01.806654  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:01.806666  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:01.872660  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:01.872681  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:41:01.908081  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:01.908099  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:01.965082  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:01.965103  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:01.982015  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:01.982033  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:02.054794  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:02.045518   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:02.046349   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:02.047002   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:02.048599   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:02.049133   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:41:02.045518   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:02.046349   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:02.047002   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:02.048599   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:02.049133   16143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:41:04.555147  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:04.565791  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:04.565856  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:04.591956  359214 cri.go:89] found id: ""
	I1213 10:41:04.591971  359214 logs.go:282] 0 containers: []
	W1213 10:41:04.591978  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:04.591984  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:04.592045  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:04.615698  359214 cri.go:89] found id: ""
	I1213 10:41:04.615713  359214 logs.go:282] 0 containers: []
	W1213 10:41:04.615720  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:04.615725  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:04.615786  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:04.640509  359214 cri.go:89] found id: ""
	I1213 10:41:04.640523  359214 logs.go:282] 0 containers: []
	W1213 10:41:04.640531  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:04.640538  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:04.640596  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:04.665547  359214 cri.go:89] found id: ""
	I1213 10:41:04.665562  359214 logs.go:282] 0 containers: []
	W1213 10:41:04.665569  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:04.665577  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:04.665637  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:04.690947  359214 cri.go:89] found id: ""
	I1213 10:41:04.690961  359214 logs.go:282] 0 containers: []
	W1213 10:41:04.690969  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:04.690974  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:04.691037  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:04.720397  359214 cri.go:89] found id: ""
	I1213 10:41:04.720421  359214 logs.go:282] 0 containers: []
	W1213 10:41:04.720429  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:04.720435  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:04.720492  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:04.750207  359214 cri.go:89] found id: ""
	I1213 10:41:04.750233  359214 logs.go:282] 0 containers: []
	W1213 10:41:04.750241  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:04.750250  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:04.750261  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:04.814350  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:04.806033   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:04.806630   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:04.808181   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:04.808726   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:04.810316   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:41:04.806033   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:04.806630   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:04.808181   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:04.808726   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:04.810316   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:41:04.814360  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:04.814381  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:04.876775  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:04.876798  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:41:04.904820  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:04.904836  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:04.962939  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:04.962958  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:07.479750  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:07.489681  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:07.489740  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:07.516670  359214 cri.go:89] found id: ""
	I1213 10:41:07.516684  359214 logs.go:282] 0 containers: []
	W1213 10:41:07.516691  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:07.516697  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:07.516754  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:07.541873  359214 cri.go:89] found id: ""
	I1213 10:41:07.541888  359214 logs.go:282] 0 containers: []
	W1213 10:41:07.541895  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:07.541900  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:07.541958  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:07.567390  359214 cri.go:89] found id: ""
	I1213 10:41:07.567404  359214 logs.go:282] 0 containers: []
	W1213 10:41:07.567411  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:07.567416  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:07.567476  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:07.595533  359214 cri.go:89] found id: ""
	I1213 10:41:07.595546  359214 logs.go:282] 0 containers: []
	W1213 10:41:07.595553  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:07.595559  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:07.595624  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:07.619449  359214 cri.go:89] found id: ""
	I1213 10:41:07.619463  359214 logs.go:282] 0 containers: []
	W1213 10:41:07.619470  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:07.619476  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:07.619535  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:07.646270  359214 cri.go:89] found id: ""
	I1213 10:41:07.646284  359214 logs.go:282] 0 containers: []
	W1213 10:41:07.646291  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:07.646297  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:07.646356  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:07.671609  359214 cri.go:89] found id: ""
	I1213 10:41:07.671623  359214 logs.go:282] 0 containers: []
	W1213 10:41:07.671630  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:07.671638  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:07.671648  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:07.726992  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:07.727010  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:07.743360  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:07.743377  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:07.805371  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:07.797570   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:07.797988   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:07.799538   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:07.799877   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:07.801379   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:41:07.797570   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:07.797988   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:07.799538   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:07.799877   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:07.801379   16341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:41:07.805381  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:07.805393  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:07.867093  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:07.867115  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:41:10.399083  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:10.409097  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:10.409158  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:10.444135  359214 cri.go:89] found id: ""
	I1213 10:41:10.444149  359214 logs.go:282] 0 containers: []
	W1213 10:41:10.444157  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:10.444162  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:10.444224  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:10.476756  359214 cri.go:89] found id: ""
	I1213 10:41:10.476771  359214 logs.go:282] 0 containers: []
	W1213 10:41:10.476778  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:10.476784  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:10.476842  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:10.501876  359214 cri.go:89] found id: ""
	I1213 10:41:10.501890  359214 logs.go:282] 0 containers: []
	W1213 10:41:10.501898  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:10.501903  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:10.501962  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:10.526921  359214 cri.go:89] found id: ""
	I1213 10:41:10.526936  359214 logs.go:282] 0 containers: []
	W1213 10:41:10.526943  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:10.526949  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:10.527008  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:10.560474  359214 cri.go:89] found id: ""
	I1213 10:41:10.560489  359214 logs.go:282] 0 containers: []
	W1213 10:41:10.560496  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:10.560501  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:10.560560  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:10.589176  359214 cri.go:89] found id: ""
	I1213 10:41:10.589190  359214 logs.go:282] 0 containers: []
	W1213 10:41:10.589209  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:10.589215  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:10.589301  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:10.614119  359214 cri.go:89] found id: ""
	I1213 10:41:10.614139  359214 logs.go:282] 0 containers: []
	W1213 10:41:10.614146  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:10.614155  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:10.614165  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:10.669835  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:10.669856  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:10.687547  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:10.687564  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:10.753151  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:10.744373   16446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:10.744993   16446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:10.747393   16446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:10.747860   16446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:10.749416   16446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:41:10.744373   16446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:10.744993   16446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:10.747393   16446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:10.747860   16446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:10.749416   16446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:41:10.753161  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:10.753175  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:10.825142  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:10.825173  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
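The refusal reported by kubectl in the cycles above can also be confirmed independently of kubectl, by probing the exact endpoint shown in the memcache.go errors. This is a sketch, not part of the log: it assumes curl is available in the node, and -k is used only because the apiserver URL is https with a cluster-local certificate.

	# Expect a 'Connection refused' failure on localhost port 8441 for as long
	# as the kube-apiserver container is missing from crictl's listing.
	curl -k "https://localhost:8441/api?timeout=32s"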
	I1213 10:41:13.352978  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:13.363579  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:13.363649  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:13.392544  359214 cri.go:89] found id: ""
	I1213 10:41:13.392558  359214 logs.go:282] 0 containers: []
	W1213 10:41:13.392565  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:13.392571  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:13.392668  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:13.431393  359214 cri.go:89] found id: ""
	I1213 10:41:13.431407  359214 logs.go:282] 0 containers: []
	W1213 10:41:13.431424  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:13.431430  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:13.431498  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:13.467012  359214 cri.go:89] found id: ""
	I1213 10:41:13.467027  359214 logs.go:282] 0 containers: []
	W1213 10:41:13.467034  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:13.467040  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:13.467114  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:13.495958  359214 cri.go:89] found id: ""
	I1213 10:41:13.495972  359214 logs.go:282] 0 containers: []
	W1213 10:41:13.495990  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:13.495996  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:13.496061  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:13.521376  359214 cri.go:89] found id: ""
	I1213 10:41:13.521399  359214 logs.go:282] 0 containers: []
	W1213 10:41:13.521408  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:13.521413  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:13.521480  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:13.548831  359214 cri.go:89] found id: ""
	I1213 10:41:13.548845  359214 logs.go:282] 0 containers: []
	W1213 10:41:13.548852  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:13.548858  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:13.548920  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:13.574611  359214 cri.go:89] found id: ""
	I1213 10:41:13.574626  359214 logs.go:282] 0 containers: []
	W1213 10:41:13.574633  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:13.574661  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:13.574673  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:13.631156  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:13.631175  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:13.647668  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:13.647685  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:13.712729  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:13.703922   16550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:13.704556   16550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:13.706153   16550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:13.706621   16550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:13.708279   16550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:41:13.703922   16550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:13.704556   16550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:13.706153   16550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:13.706621   16550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:13.708279   16550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:41:13.712740  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:13.712752  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:13.776779  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:13.776799  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:41:16.310332  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:16.320699  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:16.320761  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:16.344441  359214 cri.go:89] found id: ""
	I1213 10:41:16.344455  359214 logs.go:282] 0 containers: []
	W1213 10:41:16.344462  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:16.344468  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:16.344529  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:16.372703  359214 cri.go:89] found id: ""
	I1213 10:41:16.372717  359214 logs.go:282] 0 containers: []
	W1213 10:41:16.372725  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:16.372730  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:16.372789  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:16.397701  359214 cri.go:89] found id: ""
	I1213 10:41:16.397715  359214 logs.go:282] 0 containers: []
	W1213 10:41:16.397723  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:16.397728  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:16.397785  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:16.436711  359214 cri.go:89] found id: ""
	I1213 10:41:16.436726  359214 logs.go:282] 0 containers: []
	W1213 10:41:16.436733  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:16.436739  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:16.436795  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:16.471220  359214 cri.go:89] found id: ""
	I1213 10:41:16.471235  359214 logs.go:282] 0 containers: []
	W1213 10:41:16.471243  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:16.471248  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:16.471306  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:16.498773  359214 cri.go:89] found id: ""
	I1213 10:41:16.498788  359214 logs.go:282] 0 containers: []
	W1213 10:41:16.498796  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:16.498801  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:16.498861  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:16.523734  359214 cri.go:89] found id: ""
	I1213 10:41:16.523749  359214 logs.go:282] 0 containers: []
	W1213 10:41:16.523756  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:16.523764  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:16.523775  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:41:16.554346  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:16.554364  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:16.610645  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:16.610665  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:16.626953  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:16.626970  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:16.691344  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:16.682639   16666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:16.683311   16666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:16.685086   16666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:16.685793   16666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:16.687420   16666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:41:16.682639   16666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:16.683311   16666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:16.685086   16666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:16.685793   16666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:16.687420   16666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:41:16.691354  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:16.691367  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:19.255129  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:19.265879  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:19.265940  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:19.291837  359214 cri.go:89] found id: ""
	I1213 10:41:19.291851  359214 logs.go:282] 0 containers: []
	W1213 10:41:19.291859  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:19.291864  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:19.291923  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:19.315964  359214 cri.go:89] found id: ""
	I1213 10:41:19.315978  359214 logs.go:282] 0 containers: []
	W1213 10:41:19.315985  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:19.315990  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:19.316046  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:19.343352  359214 cri.go:89] found id: ""
	I1213 10:41:19.343366  359214 logs.go:282] 0 containers: []
	W1213 10:41:19.343373  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:19.343378  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:19.343434  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:19.367745  359214 cri.go:89] found id: ""
	I1213 10:41:19.367760  359214 logs.go:282] 0 containers: []
	W1213 10:41:19.367767  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:19.367773  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:19.367830  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:19.391416  359214 cri.go:89] found id: ""
	I1213 10:41:19.391429  359214 logs.go:282] 0 containers: []
	W1213 10:41:19.391437  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:19.391442  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:19.391503  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:19.420969  359214 cri.go:89] found id: ""
	I1213 10:41:19.420982  359214 logs.go:282] 0 containers: []
	W1213 10:41:19.420989  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:19.420995  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:19.421051  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:19.459512  359214 cri.go:89] found id: ""
	I1213 10:41:19.459528  359214 logs.go:282] 0 containers: []
	W1213 10:41:19.459536  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:19.459544  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:19.459555  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:41:19.490208  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:19.490224  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:19.546240  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:19.546261  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:19.562645  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:19.562664  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:19.625588  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:19.617541   16770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:19.617927   16770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:19.619446   16770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:19.619795   16770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:19.621463   16770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:41:19.625599  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:19.625610  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
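	Each retry above runs the same diagnostic pass: minikube asks the container runtime for any container, running or exited, whose name matches a control-plane component, and an empty ID list (found id: "") means the component never came up. A minimal manual reproduction of that pass on the node, using only commands already shown in the log:

	    # list kube-apiserver containers in any state; no output means none were ever created
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    # the two journald units minikube gathers once the listing comes back empty
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u containerd -n 400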
	I1213 10:41:22.187966  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:22.198583  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:22.198650  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:22.223213  359214 cri.go:89] found id: ""
	I1213 10:41:22.223227  359214 logs.go:282] 0 containers: []
	W1213 10:41:22.223240  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:22.223246  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:22.223303  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:22.248552  359214 cri.go:89] found id: ""
	I1213 10:41:22.248567  359214 logs.go:282] 0 containers: []
	W1213 10:41:22.248574  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:22.248579  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:22.248641  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:22.273682  359214 cri.go:89] found id: ""
	I1213 10:41:22.273697  359214 logs.go:282] 0 containers: []
	W1213 10:41:22.273714  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:22.273720  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:22.273802  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:22.299868  359214 cri.go:89] found id: ""
	I1213 10:41:22.299883  359214 logs.go:282] 0 containers: []
	W1213 10:41:22.299891  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:22.299896  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:22.299962  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:22.325309  359214 cri.go:89] found id: ""
	I1213 10:41:22.325324  359214 logs.go:282] 0 containers: []
	W1213 10:41:22.325331  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:22.325337  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:22.325399  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:22.354179  359214 cri.go:89] found id: ""
	I1213 10:41:22.354193  359214 logs.go:282] 0 containers: []
	W1213 10:41:22.354200  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:22.354205  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:22.354261  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:22.378958  359214 cri.go:89] found id: ""
	I1213 10:41:22.378980  359214 logs.go:282] 0 containers: []
	W1213 10:41:22.378987  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:22.378997  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:22.379007  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:22.440927  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:22.440949  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:22.460102  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:22.460120  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:22.529575  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:22.521290   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:22.521799   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:22.523477   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:22.524007   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:22.525558   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:41:22.529585  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:22.529595  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:22.592904  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:22.592925  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
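	The container-status gather is a one-liner with two fallbacks: resolve crictl through which (keeping the bare name if the lookup fails) and, if the crictl listing itself errors out, fall back to docker. The same pipeline expanded for readability, as a sketch rather than minikube's exact code:

	    CRICTL="$(which crictl || echo crictl)"    # prefer a resolved path, else rely on PATH
	    sudo "$CRICTL" ps -a || sudo docker ps -a  # docker listing is the last resort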
	I1213 10:41:25.122090  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:25.132657  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:25.132721  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:25.159021  359214 cri.go:89] found id: ""
	I1213 10:41:25.159036  359214 logs.go:282] 0 containers: []
	W1213 10:41:25.159044  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:25.159049  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:25.159111  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:25.185666  359214 cri.go:89] found id: ""
	I1213 10:41:25.185691  359214 logs.go:282] 0 containers: []
	W1213 10:41:25.185700  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:25.185706  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:25.185787  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:25.211201  359214 cri.go:89] found id: ""
	I1213 10:41:25.211216  359214 logs.go:282] 0 containers: []
	W1213 10:41:25.211223  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:25.211228  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:25.211288  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:25.241164  359214 cri.go:89] found id: ""
	I1213 10:41:25.241178  359214 logs.go:282] 0 containers: []
	W1213 10:41:25.241185  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:25.241191  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:25.241259  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:25.266721  359214 cri.go:89] found id: ""
	I1213 10:41:25.266737  359214 logs.go:282] 0 containers: []
	W1213 10:41:25.266745  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:25.266751  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:25.266815  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:25.292241  359214 cri.go:89] found id: ""
	I1213 10:41:25.292255  359214 logs.go:282] 0 containers: []
	W1213 10:41:25.292263  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:25.292272  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:25.292332  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:25.317411  359214 cri.go:89] found id: ""
	I1213 10:41:25.317441  359214 logs.go:282] 0 containers: []
	W1213 10:41:25.317450  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:25.317458  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:25.317469  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:25.373328  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:25.373348  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:25.390032  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:25.390057  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:25.483290  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:25.471186   16962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:25.471638   16962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:25.473963   16962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:25.474270   16962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:25.475696   16962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:41:25.483300  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:25.483311  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:25.544908  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:25.544930  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:41:28.078163  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:28.091034  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:28.091099  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:28.115911  359214 cri.go:89] found id: ""
	I1213 10:41:28.115925  359214 logs.go:282] 0 containers: []
	W1213 10:41:28.115934  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:28.115940  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:28.116004  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:28.139316  359214 cri.go:89] found id: ""
	I1213 10:41:28.139330  359214 logs.go:282] 0 containers: []
	W1213 10:41:28.139338  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:28.139343  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:28.139399  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:28.164405  359214 cri.go:89] found id: ""
	I1213 10:41:28.164420  359214 logs.go:282] 0 containers: []
	W1213 10:41:28.164427  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:28.164434  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:28.164494  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:28.193103  359214 cri.go:89] found id: ""
	I1213 10:41:28.193117  359214 logs.go:282] 0 containers: []
	W1213 10:41:28.193130  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:28.193136  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:28.193191  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:28.218193  359214 cri.go:89] found id: ""
	I1213 10:41:28.218207  359214 logs.go:282] 0 containers: []
	W1213 10:41:28.218214  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:28.218219  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:28.218277  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:28.246727  359214 cri.go:89] found id: ""
	I1213 10:41:28.246741  359214 logs.go:282] 0 containers: []
	W1213 10:41:28.246748  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:28.246754  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:28.246828  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:28.272720  359214 cri.go:89] found id: ""
	I1213 10:41:28.272735  359214 logs.go:282] 0 containers: []
	W1213 10:41:28.272753  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:28.272761  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:28.272771  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:28.329731  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:28.329751  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:28.345935  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:28.345953  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:28.409004  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:28.400511   17066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:28.401329   17066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:28.403117   17066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:28.403657   17066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:28.404653   17066 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:41:28.409014  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:28.409024  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:28.475582  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:28.475603  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:41:31.008193  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:31.019100  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:41:31.019165  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:41:31.043886  359214 cri.go:89] found id: ""
	I1213 10:41:31.043907  359214 logs.go:282] 0 containers: []
	W1213 10:41:31.043915  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:41:31.043921  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:41:31.043987  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:41:31.069993  359214 cri.go:89] found id: ""
	I1213 10:41:31.070008  359214 logs.go:282] 0 containers: []
	W1213 10:41:31.070016  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:41:31.070022  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:41:31.070089  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:41:31.098048  359214 cri.go:89] found id: ""
	I1213 10:41:31.098075  359214 logs.go:282] 0 containers: []
	W1213 10:41:31.098083  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:41:31.098089  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:41:31.098161  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:41:31.123592  359214 cri.go:89] found id: ""
	I1213 10:41:31.123608  359214 logs.go:282] 0 containers: []
	W1213 10:41:31.123616  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:41:31.123621  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:41:31.123686  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:41:31.151147  359214 cri.go:89] found id: ""
	I1213 10:41:31.151163  359214 logs.go:282] 0 containers: []
	W1213 10:41:31.151171  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:41:31.151177  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:41:31.151244  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:41:31.181236  359214 cri.go:89] found id: ""
	I1213 10:41:31.181257  359214 logs.go:282] 0 containers: []
	W1213 10:41:31.181265  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:41:31.181270  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:41:31.181332  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:41:31.210269  359214 cri.go:89] found id: ""
	I1213 10:41:31.210283  359214 logs.go:282] 0 containers: []
	W1213 10:41:31.210303  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:41:31.210311  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:41:31.210325  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:41:31.227244  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:41:31.227261  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:41:31.293720  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:41:31.285094   17168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:31.285961   17168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:31.287612   17168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:31.287962   17168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:41:31.289354   17168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:41:31.293731  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:41:31.293745  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:41:31.357626  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:41:31.357648  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:41:31.386271  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:41:31.386288  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:41:33.948226  359214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:41:33.958367  359214 kubeadm.go:602] duration metric: took 4m4.333187147s to restartPrimaryControlPlane
	W1213 10:41:33.958431  359214 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1213 10:41:33.958502  359214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
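	At this point minikube stops trying to revive the existing control plane (the restart loop above burned just over four minutes) and falls back to a full reset followed by a fresh kubeadm init. The reset invocation, copied from the Run line above and reformatted for readability; --force skips the confirmation prompt and --cri-socket pins kubeadm to containerd:

	    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
	      kubeadm reset --cri-socket /run/containerd/containerd.sock --force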
	I1213 10:41:34.375262  359214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:41:34.388893  359214 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:41:34.396960  359214 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:41:34.397012  359214 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:41:34.404696  359214 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:41:34.404706  359214 kubeadm.go:158] found existing configuration files:
	
	I1213 10:41:34.404755  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:41:34.412350  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:41:34.412405  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:41:34.419971  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:41:34.427828  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:41:34.427887  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:41:34.435644  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:41:34.443354  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:41:34.443408  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:41:34.451024  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:41:34.458860  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:41:34.458918  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
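	The four grep/rm pairs above are minikube's stale-kubeconfig sweep: any file under /etc/kubernetes that does not reference the expected control-plane endpoint is deleted so that kubeadm init regenerates it (here every grep exits 2 simply because the files are already gone after the reset). A compact equivalent of the same sweep, with the endpoint and file names taken from the log:

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # keep the file only if it already points at the expected endpoint
	      sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done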
	I1213 10:41:34.466249  359214 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:41:34.504797  359214 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:41:34.504845  359214 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:41:34.587434  359214 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:41:34.587499  359214 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:41:34.587534  359214 kubeadm.go:319] OS: Linux
	I1213 10:41:34.587577  359214 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:41:34.587624  359214 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:41:34.587670  359214 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:41:34.587717  359214 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:41:34.587764  359214 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:41:34.587816  359214 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:41:34.587860  359214 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:41:34.587906  359214 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:41:34.587951  359214 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:41:34.656000  359214 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:41:34.656112  359214 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:41:34.656196  359214 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:41:34.661831  359214 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:41:34.665544  359214 out.go:252]   - Generating certificates and keys ...
	I1213 10:41:34.665620  359214 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:41:34.665681  359214 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:41:34.665752  359214 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:41:34.665808  359214 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:41:34.665873  359214 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:41:34.665922  359214 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:41:34.665981  359214 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:41:34.666037  359214 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:41:34.666107  359214 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:41:34.666174  359214 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:41:34.666208  359214 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:41:34.666259  359214 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:41:35.121283  359214 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:41:35.663053  359214 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:41:35.746928  359214 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:41:35.962879  359214 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:41:36.165716  359214 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:41:36.166361  359214 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:41:36.169355  359214 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:41:36.172503  359214 out.go:252]   - Booting up control plane ...
	I1213 10:41:36.172623  359214 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:41:36.172875  359214 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:41:36.174488  359214 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:41:36.195010  359214 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:41:36.195108  359214 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:41:36.203505  359214 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:41:36.203828  359214 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:41:36.204072  359214 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:41:36.339853  359214 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:41:36.339968  359214 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:45:36.340589  359214 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00099636s
	I1213 10:45:36.340614  359214 kubeadm.go:319] 
	I1213 10:45:36.340667  359214 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:45:36.340697  359214 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:45:36.340795  359214 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:45:36.340800  359214 kubeadm.go:319] 
	I1213 10:45:36.340897  359214 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:45:36.340926  359214 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:45:36.340953  359214 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:45:36.340956  359214 kubeadm.go:319] 
	I1213 10:45:36.344674  359214 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:45:36.345121  359214 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:45:36.345236  359214 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:45:36.345471  359214 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 10:45:36.345476  359214 kubeadm.go:319] 
	I1213 10:45:36.345548  359214 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
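	The wait-control-plane phase polls the kubelet's local healthz endpoint for up to 4m0s and gives up when it never answers, which is exactly what the timeout above records. The probe and the two follow-ups kubeadm itself recommends can be replayed by hand on the node:

	    # the probe kubeadm performs, per the error text above
	    curl -sSL http://127.0.0.1:10248/healthz
	    # kubeadm's suggested triage when the probe fails
	    systemctl status kubelet
	    journalctl -xeu kubelet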
	W1213 10:45:36.345669  359214 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00099636s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
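	Of the three preflight warnings, the cgroups v1 one is the most plausible culprit for a kubelet that starts and immediately dies: per the warning text, kubelet v1.35 or newer refuses cgroup v1 hosts unless the option it names, 'FailCgroupV1', is explicitly set to false (in KubeletConfiguration the field is spelled failCgroupV1; that mapping is an assumption here, not something the log confirms). A quick way to check which cgroup version the host is actually running:

	    # cgroup2fs => unified cgroup v2; tmpfs => legacy cgroup v1 (the deprecated mode warned about)
	    stat -fc %T /sys/fs/cgroup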
	
	I1213 10:45:36.345754  359214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 10:45:36.752142  359214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:45:36.765694  359214 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:45:36.765753  359214 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:45:36.773442  359214 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:45:36.773451  359214 kubeadm.go:158] found existing configuration files:
	
	I1213 10:45:36.773504  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:45:36.781648  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:45:36.781706  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:45:36.789406  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:45:36.797582  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:45:36.797641  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:45:36.805463  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:45:36.813325  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:45:36.813378  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:45:36.820926  359214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:45:36.828930  359214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:45:36.828988  359214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:45:36.836622  359214 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:45:36.877023  359214 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:45:36.877075  359214 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:45:36.946303  359214 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:45:36.946364  359214 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:45:36.946398  359214 kubeadm.go:319] OS: Linux
	I1213 10:45:36.946444  359214 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:45:36.946489  359214 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:45:36.946532  359214 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:45:36.946576  359214 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:45:36.946620  359214 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:45:36.946665  359214 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:45:36.946727  359214 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:45:36.946771  359214 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:45:36.946813  359214 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:45:37.023251  359214 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:45:37.023367  359214 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:45:37.023453  359214 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:45:37.035188  359214 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:45:37.040505  359214 out.go:252]   - Generating certificates and keys ...
	I1213 10:45:37.040588  359214 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:45:37.040657  359214 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:45:37.040732  359214 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:45:37.040792  359214 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:45:37.040860  359214 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:45:37.040912  359214 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:45:37.040974  359214 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:45:37.041034  359214 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:45:37.041112  359214 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:45:37.041183  359214 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:45:37.041219  359214 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:45:37.041274  359214 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:45:37.085508  359214 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:45:37.524146  359214 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:45:37.643175  359214 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:45:38.077377  359214 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:45:38.482147  359214 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:45:38.482682  359214 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:45:38.485202  359214 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:45:38.490562  359214 out.go:252]   - Booting up control plane ...
	I1213 10:45:38.490673  359214 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:45:38.490778  359214 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:45:38.490854  359214 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:45:38.510040  359214 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:45:38.510136  359214 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:45:38.518983  359214 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:45:38.519096  359214 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:45:38.519153  359214 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:45:38.652209  359214 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:45:38.652350  359214 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:49:38.651567  359214 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001187482s
	I1213 10:49:38.651592  359214 kubeadm.go:319] 
	I1213 10:49:38.651654  359214 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:49:38.651686  359214 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:49:38.651792  359214 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:49:38.651797  359214 kubeadm.go:319] 
	I1213 10:49:38.651939  359214 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:49:38.651995  359214 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:49:38.652034  359214 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:49:38.652037  359214 kubeadm.go:319] 
	I1213 10:49:38.656860  359214 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:49:38.657251  359214 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:49:38.657352  359214 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:49:38.657572  359214 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 10:49:38.657576  359214 kubeadm.go:319] 
	I1213 10:49:38.657639  359214 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 10:49:38.657718  359214 kubeadm.go:403] duration metric: took 12m9.068082439s to StartCluster
	I1213 10:49:38.657750  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:49:38.657821  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:49:38.689768  359214 cri.go:89] found id: ""
	I1213 10:49:38.689783  359214 logs.go:282] 0 containers: []
	W1213 10:49:38.689798  359214 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:49:38.689803  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:49:38.689865  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:49:38.719427  359214 cri.go:89] found id: ""
	I1213 10:49:38.719441  359214 logs.go:282] 0 containers: []
	W1213 10:49:38.719449  359214 logs.go:284] No container was found matching "etcd"
	I1213 10:49:38.719455  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:49:38.719513  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:49:38.747452  359214 cri.go:89] found id: ""
	I1213 10:49:38.747466  359214 logs.go:282] 0 containers: []
	W1213 10:49:38.747474  359214 logs.go:284] No container was found matching "coredns"
	I1213 10:49:38.747480  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:49:38.747544  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:49:38.772270  359214 cri.go:89] found id: ""
	I1213 10:49:38.772286  359214 logs.go:282] 0 containers: []
	W1213 10:49:38.772293  359214 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:49:38.772298  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:49:38.772358  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:49:38.796548  359214 cri.go:89] found id: ""
	I1213 10:49:38.796562  359214 logs.go:282] 0 containers: []
	W1213 10:49:38.796570  359214 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:49:38.796575  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:49:38.796633  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:49:38.825383  359214 cri.go:89] found id: ""
	I1213 10:49:38.825397  359214 logs.go:282] 0 containers: []
	W1213 10:49:38.825404  359214 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:49:38.825410  359214 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:49:38.825467  359214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:49:38.854743  359214 cri.go:89] found id: ""
	I1213 10:49:38.854758  359214 logs.go:282] 0 containers: []
	W1213 10:49:38.854765  359214 logs.go:284] No container was found matching "kindnet"
	I1213 10:49:38.854775  359214 logs.go:123] Gathering logs for kubelet ...
	I1213 10:49:38.854785  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:49:38.911438  359214 logs.go:123] Gathering logs for dmesg ...
	I1213 10:49:38.911459  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:49:38.928194  359214 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:49:38.928212  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:49:38.993056  359214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:49:38.985025   20997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:38.985836   20997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:38.987445   20997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:38.987763   20997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:38.989301   20997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[... identical stderr as above; duplicate elided ...]
	
	** /stderr **
	I1213 10:49:38.993068  359214 logs.go:123] Gathering logs for containerd ...
	I1213 10:49:38.993079  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:49:39.059560  359214 logs.go:123] Gathering logs for container status ...
	I1213 10:49:39.059584  359214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:49:39.090490  359214 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001187482s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 10:49:39.090521  359214 out.go:285] * 
	W1213 10:49:39.090586  359214 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[... stdout/stderr identical to the kubeadm init output above; duplicate block elided ...]
	W1213 10:49:39.090603  359214 out.go:285] * 
	W1213 10:49:39.092733  359214 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:49:39.097735  359214 out.go:203] 
	W1213 10:49:39.101721  359214 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[... stdout/stderr identical to the kubeadm init output above; duplicate block elided ...]
	W1213 10:49:39.101772  359214 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 10:49:39.101799  359214 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 10:49:39.104924  359214 out.go:203] 
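Taken together, the SystemVerification warning and the kubelet journal below point at one root cause: kubelet v1.35 refuses to start on a cgroup v1 host unless cgroup v1 support is explicitly re-enabled. A minimal sketch of the two workarounds the log itself names, assuming this run's profile name, and assuming minikube still forwards kubelet settings via --extra-config and applies kubelet patches through kubeadm (the '[patches] Applied patch ... to target "kubeletconfiguration"' line above shows that mechanism in use):

    # Workaround from the minikube suggestion above: retry with an explicit cgroup driver.
    out/minikube-linux-arm64 start -p functional-652709 --driver=docker \
      --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 \
      --extra-config=kubelet.cgroup-driver=systemd

    # Workaround from the kubeadm warning: opt back into cgroup v1 for kubelet >= 1.35.
    # 'failCgroupV1' is assumed to be the YAML spelling of the 'FailCgroupV1' option cited
    # above; the fragment would be applied as a kubeadm "kubeletconfiguration" patch.
    # Note the warning also says the validation must be skipped explicitly.
    cat <<'EOF' > kubeletconfiguration-patch.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false
    EOF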
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861227644Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861318114Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861438764Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861513571Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861578449Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861642483Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861707304Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861776350Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861845545Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.861934818Z" level=info msg="Connect containerd service"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.862289545Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.862951451Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.874919104Z" level=info msg="Start subscribing containerd event"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.875103516Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.875569851Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.881349344Z" level=info msg="Start recovering state"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.920785039Z" level=info msg="Start event monitor"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.921012364Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.921112731Z" level=info msg="Start streaming server"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.921198171Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.921421730Z" level=info msg="runtime interface starting up..."
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.921496201Z" level=info msg="starting plugins..."
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.921561104Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 10:37:27 functional-652709 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 10:37:27 functional-652709 containerd[9722]: time="2025-12-13T10:37:27.922785206Z" level=info msg="containerd successfully booted in 0.088911s"
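One note on the containerd journal above: the "failed to load cni during init" error only means no CNI config exists yet in /etc/cni/net.d, which is typical before the control plane (and, here, kindnet) comes up; it is not the failure in this run. A quick way to inspect the directory inside the node, assuming the docker driver and this profile:

    # Sketch: list CNI configs inside the minikube node.
    out/minikube-linux-arm64 ssh -p functional-652709 -- sudo ls -la /etc/cni/net.d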
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:50.887619   22614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:50.888318   22614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:50.890026   22614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:50.890588   22614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:50.892191   22614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +12.907693] overlayfs: idmapped layers are currently not supported
	[Dec13 09:25] overlayfs: idmapped layers are currently not supported
	[ +26.192425] overlayfs: idmapped layers are currently not supported
	[Dec13 09:26] overlayfs: idmapped layers are currently not supported
	[ +25.729788] overlayfs: idmapped layers are currently not supported
	[Dec13 09:27] overlayfs: idmapped layers are currently not supported
	[Dec13 09:28] overlayfs: idmapped layers are currently not supported
	[Dec13 09:31] overlayfs: idmapped layers are currently not supported
	[Dec13 09:32] overlayfs: idmapped layers are currently not supported
	[ +17.979093] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec13 09:43] overlayfs: idmapped layers are currently not supported
	[Dec13 09:45] overlayfs: idmapped layers are currently not supported
	[ +25.885102] overlayfs: idmapped layers are currently not supported
	[Dec13 09:46] overlayfs: idmapped layers are currently not supported
	[ +22.078149] overlayfs: idmapped layers are currently not supported
	[Dec13 09:47] overlayfs: idmapped layers are currently not supported
	[Dec13 09:48] overlayfs: idmapped layers are currently not supported
	[Dec13 09:49] overlayfs: idmapped layers are currently not supported
	[Dec13 09:51] overlayfs: idmapped layers are currently not supported
	[ +17.043564] overlayfs: idmapped layers are currently not supported
	[Dec13 09:52] overlayfs: idmapped layers are currently not supported
	[Dec13 09:53] overlayfs: idmapped layers are currently not supported
	[Dec13 10:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:19] hrtimer: interrupt took 21247146 ns
	
	
	==> kernel <==
	 10:51:50 up  3:34,  0 user,  load average: 0.11, 0.19, 0.44
	Linux functional-652709 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:51:47 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:51:48 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 493.
	Dec 13 10:51:48 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:51:48 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:51:48 functional-652709 kubelet[22499]: E1213 10:51:48.460733   22499 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:51:48 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:51:48 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:51:49 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 494.
	Dec 13 10:51:49 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:51:49 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:51:49 functional-652709 kubelet[22504]: E1213 10:51:49.210516   22504 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:51:49 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:51:49 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:51:49 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 495.
	Dec 13 10:51:49 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:51:49 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:51:49 functional-652709 kubelet[22521]: E1213 10:51:49.991880   22521 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:51:49 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:51:49 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:51:50 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 496.
	Dec 13 10:51:50 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:51:50 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:51:50 functional-652709 kubelet[22571]: E1213 10:51:50.720868   22571 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:51:50 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:51:50 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
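The restart counter climbing from 493 to 496 within a few seconds shows systemd crash-looping kubelet on the same configuration-validation error every time. Whether the host really exposes only cgroup v1 can be confirmed from the filesystem type mounted at /sys/fs/cgroup; a minimal check:

    # 'cgroup2fs' indicates the unified cgroup v2 hierarchy; 'tmpfs' indicates legacy cgroup v1.
    stat -fc %T /sys/fs/cgroup/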
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-652709 -n functional-652709
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-652709 -n functional-652709: exit status 2 (353.722341ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-652709" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.51s)
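For reference, the status probe above pulls a single field out of minikube's status via a Go template, which is why the captured stdout is just "Stopped"; when run by hand the template needs shell quoting:

    out/minikube-linux-arm64 status -p functional-652709 --format='{{.APIServer}}'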

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.67s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
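The warnings that follow are a single poll loop hitting the dead API server once per interval; the loop is roughly equivalent to the label-selector query below (a hypothetical manual reproduction, which fails the same way while the apiserver is down):

    kubectl --context functional-652709 -n kube-system get pods \
      -l integration-test=storage-provisioner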
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[... same pod-list warning repeated 9 more times ...]
I1213 10:49:57.481134  308915 retry.go:31] will retry after 3.132091892s: Temporary Error: Get "http://10.97.13.246": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[... same pod-list warning repeated 12 more times ...]
I1213 10:50:10.613897  308915 retry.go:31] will retry after 3.417245439s: Temporary Error: Get "http://10.97.13.246": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
E1213 10:50:12.241397  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[... same pod-list warning repeated 10 more times ...]
I1213 10:50:24.031948  308915 retry.go:31] will retry after 7.65682524s: Temporary Error: Get "http://10.97.13.246": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous line repeated 17 more times]
I1213 10:50:41.689900  308915 retry.go:31] will retry after 7.583851076s: Temporary Error: Get "http://10.97.13.246": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous line repeated 17 more times]
I1213 10:50:59.274079  308915 retry.go:31] will retry after 12.671333498s: Temporary Error: Get "http://10.97.13.246": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous line repeated 21 more times]
I1213 10:51:21.945882  308915 retry.go:31] will retry after 17.170071804s: Temporary Error: Get "http://10.97.13.246": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
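The retry.go delays above (7.66s, 7.58s, 12.67s, 17.17s) grow with jitter between attempts; a minimal sketch of that kind of capped, jittered backoff loop follows (hypothetical helper, not minikube's actual retry package):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff retries fn, roughly doubling the jittered wait between
    // attempts up to maxWait, logging each delay like the retry.go lines above.
    func retryWithBackoff(attempts int, base, maxWait time.Duration, fn func() error) error {
        wait := base
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            // Add up to 25% jitter so concurrent pollers do not retry in lockstep.
            jitter := time.Duration(rand.Int63n(int64(wait / 4)))
            fmt.Printf("will retry after %v: %v\n", wait+jitter, err)
            time.Sleep(wait + jitter)
            wait *= 2
            if wait > maxWait {
                wait = maxWait
            }
        }
        return err
    }

    func main() {
        _ = retryWithBackoff(4, 7*time.Second, 20*time.Second, func() error {
            return errors.New("Temporary Error: context deadline exceeded")
        })
    }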
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous line repeated 26 more times]
E1213 10:51:48.080534  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous line repeated 80 more times]
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
E1213 10:53:15.317239  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[identical warning repeated 32 times; 31 duplicates omitted]
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-652709 -n functional-652709
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-652709 -n functional-652709: exit status 2 (307.04888ms)

-- stdout --
	Stopped
-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-652709" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
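Note: the timed-out poll can be reproduced by hand. A minimal equivalent check (a sketch assuming kubectl is on PATH and that the profile's kubeconfig context is named functional-652709; neither is recorded in this report) is:

	kubectl --context functional-652709 -n kube-system get pods -l integration-test=storage-provisioner

With the apiserver stopped, this fails with the same "connect: connection refused" seen in the warnings above.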
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-652709
helpers_test.go:244: (dbg) docker inspect functional-652709:
-- stdout --
	[
	    {
	        "Id": "0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f",
	        "Created": "2025-12-13T10:22:44.366993781Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 347931,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:22:44.437030763Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/hostname",
	        "HostsPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/hosts",
	        "LogPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f-json.log",
	        "Name": "/functional-652709",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-652709:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-652709",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f",
	                "LowerDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095-init/diff:/var/lib/docker/overlay2/5d0ca86320f2c06765995d644a73af30cd6ddb36f5c1a2a6b1ebb24af53cdabd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/merged",
	                "UpperDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/diff",
	                "WorkDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-652709",
	                "Source": "/var/lib/docker/volumes/functional-652709/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-652709",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-652709",
	                "name.minikube.sigs.k8s.io": "functional-652709",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "52e527b5bd789a02eb7efb651200033ed4929e5fc7545e9df042d3f777cc9782",
	            "SandboxKey": "/var/run/docker/netns/52e527b5bd78",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-652709": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:23:08:9e:cb:13",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "344f2b940117dadb28d1ef1328f911c0446307288fdfafebfe59f38e473f79cb",
	                    "EndpointID": "8954f96e5987202be5715e7023384fe862744778b2520bccba28c57814f0980f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-652709",
	                        "0f6101071ca2"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
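Note: the host port forwarded to the apiserver can be read straight out of the inspect data above with a Go template (standard docker CLI syntax; the 8441/tcp -> 33128 mapping comes from the NetworkSettings.Ports block above):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-652709

For this container it prints 33128.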
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-652709 -n functional-652709
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-652709 -n functional-652709: exit status 2 (326.104538ms)

-- stdout --
	Running
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
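Note: the split result above (Host "Running", APIServer "Stopped") can also be gathered in one call, since minikube status accepts a Go template over its status fields, for example:

	out/minikube-linux-arm64 status -p functional-652709 --format='host:{{.Host}} apiserver:{{.APIServer}}'

This is a convenience sketch; it uses the same {{.Host}} and {{.APIServer}} fields as the two separate checks in this post-mortem.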
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-652709 image load --daemon kicbase/echo-server:functional-652709 --alsologtostderr                                                                   │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ image          │ functional-652709 image ls                                                                                                                                      │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ image          │ functional-652709 image save kicbase/echo-server:functional-652709 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ image          │ functional-652709 image rm kicbase/echo-server:functional-652709 --alsologtostderr                                                                              │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ image          │ functional-652709 image ls                                                                                                                                      │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ image          │ functional-652709 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ image          │ functional-652709 image ls                                                                                                                                      │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ image          │ functional-652709 image save --daemon kicbase/echo-server:functional-652709 --alsologtostderr                                                                   │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ ssh            │ functional-652709 ssh sudo cat /etc/test/nested/copy/308915/hosts                                                                                               │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ ssh            │ functional-652709 ssh sudo cat /etc/ssl/certs/308915.pem                                                                                                        │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ ssh            │ functional-652709 ssh sudo cat /usr/share/ca-certificates/308915.pem                                                                                            │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ ssh            │ functional-652709 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ ssh            │ functional-652709 ssh sudo cat /etc/ssl/certs/3089152.pem                                                                                                       │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ ssh            │ functional-652709 ssh sudo cat /usr/share/ca-certificates/3089152.pem                                                                                           │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ ssh            │ functional-652709 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ image          │ functional-652709 image ls --format short --alsologtostderr                                                                                                     │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ image          │ functional-652709 image ls --format yaml --alsologtostderr                                                                                                      │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ ssh            │ functional-652709 ssh pgrep buildkitd                                                                                                                           │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │                     │
	│ image          │ functional-652709 image build -t localhost/my-image:functional-652709 testdata/build --alsologtostderr                                                          │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ image          │ functional-652709 image ls                                                                                                                                      │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ image          │ functional-652709 image ls --format json --alsologtostderr                                                                                                      │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ image          │ functional-652709 image ls --format table --alsologtostderr                                                                                                     │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ update-context │ functional-652709 update-context --alsologtostderr -v=2                                                                                                         │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ update-context │ functional-652709 update-context --alsologtostderr -v=2                                                                                                         │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ update-context │ functional-652709 update-context --alsologtostderr -v=2                                                                                                         │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:52:06
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:52:06.450755  376758 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:52:06.450873  376758 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:52:06.450913  376758 out.go:374] Setting ErrFile to fd 2...
	I1213 10:52:06.450919  376758 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:52:06.451185  376758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 10:52:06.451584  376758 out.go:368] Setting JSON to false
	I1213 10:52:06.452450  376758 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12879,"bootTime":1765610247,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 10:52:06.452554  376758 start.go:143] virtualization:  
	I1213 10:52:06.457685  376758 out.go:179] * [functional-652709] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:52:06.460878  376758 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 10:52:06.460951  376758 notify.go:221] Checking for updates...
	I1213 10:52:06.467700  376758 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:52:06.470736  376758 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:52:06.473591  376758 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 10:52:06.476443  376758 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:52:06.479409  376758 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:52:06.482813  376758 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:52:06.483365  376758 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:52:06.512382  376758 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:52:06.512519  376758 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:52:06.577177  376758 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:52:06.565738318 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:52:06.577300  376758 docker.go:319] overlay module found
	I1213 10:52:06.580486  376758 out.go:179] * Using the docker driver based on existing profile
	I1213 10:52:06.583327  376758 start.go:309] selected driver: docker
	I1213 10:52:06.583358  376758 start.go:927] validating driver "docker" against &{Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:52:06.583472  376758 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:52:06.583587  376758 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:52:06.648334  376758 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:52:06.639010814 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:52:06.648776  376758 cni.go:84] Creating CNI manager for ""
	I1213 10:52:06.648844  376758 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:52:06.648894  376758 start.go:353] cluster config:
	{Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:52:06.652023  376758 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 10:52:11 functional-652709 containerd[9722]: time="2025-12-13T10:52:11.805911869Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 10:52:11 functional-652709 containerd[9722]: time="2025-12-13T10:52:11.806535391Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-652709\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 10:52:12 functional-652709 containerd[9722]: time="2025-12-13T10:52:12.853340014Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-652709\""
	Dec 13 10:52:12 functional-652709 containerd[9722]: time="2025-12-13T10:52:12.856063578Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-652709\""
	Dec 13 10:52:12 functional-652709 containerd[9722]: time="2025-12-13T10:52:12.858792451Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 13 10:52:12 functional-652709 containerd[9722]: time="2025-12-13T10:52:12.868153407Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-652709\" returns successfully"
	Dec 13 10:52:13 functional-652709 containerd[9722]: time="2025-12-13T10:52:13.103714995Z" level=info msg="No images store for sha256:d5202eaf7c5c05866420a8e87f4b94738f6724d3a0af4c0126dd5782ae166ba5"
	Dec 13 10:52:13 functional-652709 containerd[9722]: time="2025-12-13T10:52:13.106765840Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-652709\""
	Dec 13 10:52:13 functional-652709 containerd[9722]: time="2025-12-13T10:52:13.113624535Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 10:52:13 functional-652709 containerd[9722]: time="2025-12-13T10:52:13.114000227Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-652709\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 10:52:13 functional-652709 containerd[9722]: time="2025-12-13T10:52:13.922928004Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-652709\""
	Dec 13 10:52:13 functional-652709 containerd[9722]: time="2025-12-13T10:52:13.925540527Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-652709\""
	Dec 13 10:52:13 functional-652709 containerd[9722]: time="2025-12-13T10:52:13.928734898Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 13 10:52:13 functional-652709 containerd[9722]: time="2025-12-13T10:52:13.936994805Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-652709\" returns successfully"
	Dec 13 10:52:14 functional-652709 containerd[9722]: time="2025-12-13T10:52:14.627766596Z" level=info msg="No images store for sha256:d28e9eade065faa1f42a805507e403385dcb232e15f883e5d12433456d0d9625"
	Dec 13 10:52:14 functional-652709 containerd[9722]: time="2025-12-13T10:52:14.630156158Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-652709\""
	Dec 13 10:52:14 functional-652709 containerd[9722]: time="2025-12-13T10:52:14.640126089Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 10:52:14 functional-652709 containerd[9722]: time="2025-12-13T10:52:14.640812348Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-652709\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 10:52:22 functional-652709 containerd[9722]: time="2025-12-13T10:52:22.546752094Z" level=info msg="connecting to shim tngwe2rmlcjs9ey2u6tqqkhjx" address="unix:///run/containerd/s/7c5522f2883d087caecf2bf37f237f4a63b3fda552de4edac4328a004e04d41b" namespace=k8s.io protocol=ttrpc version=3
	Dec 13 10:52:22 functional-652709 containerd[9722]: time="2025-12-13T10:52:22.617517158Z" level=info msg="shim disconnected" id=tngwe2rmlcjs9ey2u6tqqkhjx namespace=k8s.io
	Dec 13 10:52:22 functional-652709 containerd[9722]: time="2025-12-13T10:52:22.618356641Z" level=info msg="cleaning up after shim disconnected" id=tngwe2rmlcjs9ey2u6tqqkhjx namespace=k8s.io
	Dec 13 10:52:22 functional-652709 containerd[9722]: time="2025-12-13T10:52:22.618476158Z" level=info msg="cleaning up dead shim" id=tngwe2rmlcjs9ey2u6tqqkhjx namespace=k8s.io
	Dec 13 10:52:22 functional-652709 containerd[9722]: time="2025-12-13T10:52:22.917090255Z" level=info msg="ImageCreate event name:\"localhost/my-image:functional-652709\""
	Dec 13 10:52:22 functional-652709 containerd[9722]: time="2025-12-13T10:52:22.934733953Z" level=info msg="ImageCreate event name:\"sha256:a36d03f7ad105f0365fa04ea21effca6a15cb42d29181b995439b46b977d5500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 10:52:22 functional-652709 containerd[9722]: time="2025-12-13T10:52:22.935236547Z" level=info msg="ImageUpdate event name:\"localhost/my-image:functional-652709\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:49.303468   25184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:49.303994   25184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:49.305723   25184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:49.306594   25184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:49.308321   25184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +12.907693] overlayfs: idmapped layers are currently not supported
	[Dec13 09:25] overlayfs: idmapped layers are currently not supported
	[ +26.192425] overlayfs: idmapped layers are currently not supported
	[Dec13 09:26] overlayfs: idmapped layers are currently not supported
	[ +25.729788] overlayfs: idmapped layers are currently not supported
	[Dec13 09:27] overlayfs: idmapped layers are currently not supported
	[Dec13 09:28] overlayfs: idmapped layers are currently not supported
	[Dec13 09:31] overlayfs: idmapped layers are currently not supported
	[Dec13 09:32] overlayfs: idmapped layers are currently not supported
	[ +17.979093] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec13 09:43] overlayfs: idmapped layers are currently not supported
	[Dec13 09:45] overlayfs: idmapped layers are currently not supported
	[ +25.885102] overlayfs: idmapped layers are currently not supported
	[Dec13 09:46] overlayfs: idmapped layers are currently not supported
	[ +22.078149] overlayfs: idmapped layers are currently not supported
	[Dec13 09:47] overlayfs: idmapped layers are currently not supported
	[Dec13 09:48] overlayfs: idmapped layers are currently not supported
	[Dec13 09:49] overlayfs: idmapped layers are currently not supported
	[Dec13 09:51] overlayfs: idmapped layers are currently not supported
	[ +17.043564] overlayfs: idmapped layers are currently not supported
	[Dec13 09:52] overlayfs: idmapped layers are currently not supported
	[Dec13 09:53] overlayfs: idmapped layers are currently not supported
	[Dec13 10:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:19] hrtimer: interrupt took 21247146 ns
	
	
	==> kernel <==
	 10:53:49 up  3:36,  0 user,  load average: 0.45, 0.43, 0.51
	Linux functional-652709 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:53:46 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:53:46 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 651.
	Dec 13 10:53:46 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:53:46 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:53:46 functional-652709 kubelet[25053]: E1213 10:53:46.957704   25053 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:53:46 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:53:46 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:53:47 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 652.
	Dec 13 10:53:47 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:53:47 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:53:47 functional-652709 kubelet[25058]: E1213 10:53:47.706229   25058 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:53:47 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:53:47 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:53:48 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 653.
	Dec 13 10:53:48 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:53:48 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:53:48 functional-652709 kubelet[25079]: E1213 10:53:48.474249   25079 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:53:48 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:53:48 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:53:49 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 654.
	Dec 13 10:53:49 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:53:49 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:53:49 functional-652709 kubelet[25164]: E1213 10:53:49.209349   25164 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:53:49 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:53:49 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
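
The kubelet section above shows why nothing kubectl-based can work here: the node agent is crash-looping on startup validation ("kubelet is configured to not run on a host using cgroup v1"), with systemd's restart counter already in the 650s. This looks like the kubelet's cgroup v1 refusal behavior (the failCgroupV1 KubeletConfiguration field introduced around v1.31), which the v1.35.0-beta.0 kubelet appears to be enforcing in this run. A minimal check of which cgroup hierarchy the node is actually on, assuming shell access to the same profile:

	# "cgroup2fs" means cgroup v2; "tmpfs" means the node is still on cgroup v1 (hybrid)
	out/minikube-linux-arm64 -p functional-652709 ssh -- stat -fc %T /sys/fs/cgroup

On a cgroup v1 result, every kubelet restart will fail the same validation until the host moves to cgroup v2 or the kubelet configuration is relaxed.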
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-652709 -n functional-652709
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-652709 -n functional-652709: exit status 2 (313.362469ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-652709" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.67s)
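
Given the apiserver state above, the PersistentVolumeClaim failure is a downstream symptom rather than a storage problem. A quick pre-flight before re-running any kubectl-dependent subtest, sketched against the same profile:

	# Both should fail fast while the apiserver is down (status reported "Stopped" above)
	out/minikube-linux-arm64 status -p functional-652709
	kubectl --context functional-652709 get --raw /readyz --request-timeout=5s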

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.44s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-652709 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-652709 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (58.838298ms)

                                                
                                                
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-652709 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
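
The template error itself is secondary: with the apiserver refusing connections, kubectl ends up templating an empty List (see the raw data above), and index .items 0 errors on an empty slice, so the test surfaces a template failure on top of the underlying connectivity failure. A guarded variant of the same query, as a sketch (Go templates treat an empty slice as false, so the {{if}} wrapper is the only change):

	kubectl --context functional-652709 get nodes --output=go-template \
	  --template='{{if .items}}{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}{{else}}no nodes{{end}}'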
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-652709
helpers_test.go:244: (dbg) docker inspect functional-652709:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f",
	        "Created": "2025-12-13T10:22:44.366993781Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 347931,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:22:44.437030763Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/hostname",
	        "HostsPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/hosts",
	        "LogPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f-json.log",
	        "Name": "/functional-652709",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-652709:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-652709",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f",
	                "LowerDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095-init/diff:/var/lib/docker/overlay2/5d0ca86320f2c06765995d644a73af30cd6ddb36f5c1a2a6b1ebb24af53cdabd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/merged",
	                "UpperDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/diff",
	                "WorkDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-652709",
	                "Source": "/var/lib/docker/volumes/functional-652709/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-652709",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-652709",
	                "name.minikube.sigs.k8s.io": "functional-652709",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "52e527b5bd789a02eb7efb651200033ed4929e5fc7545e9df042d3f777cc9782",
	            "SandboxKey": "/var/run/docker/netns/52e527b5bd78",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-652709": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:23:08:9e:cb:13",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "344f2b940117dadb28d1ef1328f911c0446307288fdfafebfe59f38e473f79cb",
	                    "EndpointID": "8954f96e5987202be5715e7023384fe862744778b2520bccba28c57814f0980f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-652709",
	                        "0f6101071ca2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
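
One useful datum in the inspect output: the container is Running and the apiserver port 8441/tcp is published to 127.0.0.1:33128, which suggests the refusals come from inside the node rather than from Docker networking. A direct probe against the published port, assuming the same host:

	# A reset or refusal here implicates the apiserver process, not the port mapping
	curl -sk --max-time 5 https://127.0.0.1:33128/healthz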
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-652709 -n functional-652709
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-652709 -n functional-652709: exit status 2 (341.235593ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount     │ -p functional-652709 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1116597745/001:/mount3 --alsologtostderr -v=1                            │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │                     │
	│ mount     │ -p functional-652709 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1116597745/001:/mount2 --alsologtostderr -v=1                            │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │                     │
	│ ssh       │ functional-652709 ssh findmnt -T /mount1                                                                                                                        │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ ssh       │ functional-652709 ssh findmnt -T /mount2                                                                                                                        │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ ssh       │ functional-652709 ssh findmnt -T /mount3                                                                                                                        │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ mount     │ -p functional-652709 --kill=true                                                                                                                                │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │                     │
	│ start     │ -p functional-652709 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0             │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │                     │
	│ start     │ -p functional-652709 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0             │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │                     │
	│ start     │ -p functional-652709 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                       │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-652709 --alsologtostderr -v=1                                                                                                  │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │                     │
	│ license   │                                                                                                                                                                 │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ ssh       │ functional-652709 ssh sudo systemctl is-active docker                                                                                                           │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │                     │
	│ ssh       │ functional-652709 ssh sudo systemctl is-active crio                                                                                                             │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │                     │
	│ image     │ functional-652709 image load --daemon kicbase/echo-server:functional-652709 --alsologtostderr                                                                   │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ image     │ functional-652709 image ls                                                                                                                                      │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ image     │ functional-652709 image load --daemon kicbase/echo-server:functional-652709 --alsologtostderr                                                                   │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ image     │ functional-652709 image ls                                                                                                                                      │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ image     │ functional-652709 image load --daemon kicbase/echo-server:functional-652709 --alsologtostderr                                                                   │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ image     │ functional-652709 image ls                                                                                                                                      │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ image     │ functional-652709 image save kicbase/echo-server:functional-652709 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ image     │ functional-652709 image rm kicbase/echo-server:functional-652709 --alsologtostderr                                                                              │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ image     │ functional-652709 image ls                                                                                                                                      │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ image     │ functional-652709 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ image     │ functional-652709 image ls                                                                                                                                      │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	│ image     │ functional-652709 image save --daemon kicbase/echo-server:functional-652709 --alsologtostderr                                                                   │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:52 UTC │ 13 Dec 25 10:52 UTC │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:52:06
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:52:06.450755  376758 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:52:06.450873  376758 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:52:06.450913  376758 out.go:374] Setting ErrFile to fd 2...
	I1213 10:52:06.450919  376758 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:52:06.451185  376758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 10:52:06.451584  376758 out.go:368] Setting JSON to false
	I1213 10:52:06.452450  376758 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12879,"bootTime":1765610247,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 10:52:06.452554  376758 start.go:143] virtualization:  
	I1213 10:52:06.457685  376758 out.go:179] * [functional-652709] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:52:06.460878  376758 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 10:52:06.460951  376758 notify.go:221] Checking for updates...
	I1213 10:52:06.467700  376758 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:52:06.470736  376758 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:52:06.473591  376758 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 10:52:06.476443  376758 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:52:06.479409  376758 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:52:06.482813  376758 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:52:06.483365  376758 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:52:06.512382  376758 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:52:06.512519  376758 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:52:06.577177  376758 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:52:06.565738318 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:52:06.577300  376758 docker.go:319] overlay module found
	I1213 10:52:06.580486  376758 out.go:179] * Using the docker driver based on existing profile
	I1213 10:52:06.583327  376758 start.go:309] selected driver: docker
	I1213 10:52:06.583358  376758 start.go:927] validating driver "docker" against &{Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:52:06.583472  376758 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:52:06.583587  376758 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:52:06.648334  376758 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:52:06.639010814 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:52:06.648776  376758 cni.go:84] Creating CNI manager for ""
	I1213 10:52:06.648844  376758 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:52:06.648894  376758 start.go:353] cluster config:
	{Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:52:06.652023  376758 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 10:52:10 functional-652709 containerd[9722]: time="2025-12-13T10:52:10.724777659Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-652709\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 10:52:11 functional-652709 containerd[9722]: time="2025-12-13T10:52:11.544881149Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-652709\""
	Dec 13 10:52:11 functional-652709 containerd[9722]: time="2025-12-13T10:52:11.547560766Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-652709\""
	Dec 13 10:52:11 functional-652709 containerd[9722]: time="2025-12-13T10:52:11.549779094Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 13 10:52:11 functional-652709 containerd[9722]: time="2025-12-13T10:52:11.558608599Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-652709\" returns successfully"
	Dec 13 10:52:11 functional-652709 containerd[9722]: time="2025-12-13T10:52:11.794178794Z" level=info msg="No images store for sha256:d5202eaf7c5c05866420a8e87f4b94738f6724d3a0af4c0126dd5782ae166ba5"
	Dec 13 10:52:11 functional-652709 containerd[9722]: time="2025-12-13T10:52:11.796180899Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-652709\""
	Dec 13 10:52:11 functional-652709 containerd[9722]: time="2025-12-13T10:52:11.805911869Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 10:52:11 functional-652709 containerd[9722]: time="2025-12-13T10:52:11.806535391Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-652709\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 10:52:12 functional-652709 containerd[9722]: time="2025-12-13T10:52:12.853340014Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-652709\""
	Dec 13 10:52:12 functional-652709 containerd[9722]: time="2025-12-13T10:52:12.856063578Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-652709\""
	Dec 13 10:52:12 functional-652709 containerd[9722]: time="2025-12-13T10:52:12.858792451Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 13 10:52:12 functional-652709 containerd[9722]: time="2025-12-13T10:52:12.868153407Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-652709\" returns successfully"
	Dec 13 10:52:13 functional-652709 containerd[9722]: time="2025-12-13T10:52:13.103714995Z" level=info msg="No images store for sha256:d5202eaf7c5c05866420a8e87f4b94738f6724d3a0af4c0126dd5782ae166ba5"
	Dec 13 10:52:13 functional-652709 containerd[9722]: time="2025-12-13T10:52:13.106765840Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-652709\""
	Dec 13 10:52:13 functional-652709 containerd[9722]: time="2025-12-13T10:52:13.113624535Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 10:52:13 functional-652709 containerd[9722]: time="2025-12-13T10:52:13.114000227Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-652709\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 10:52:13 functional-652709 containerd[9722]: time="2025-12-13T10:52:13.922928004Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-652709\""
	Dec 13 10:52:13 functional-652709 containerd[9722]: time="2025-12-13T10:52:13.925540527Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-652709\""
	Dec 13 10:52:13 functional-652709 containerd[9722]: time="2025-12-13T10:52:13.928734898Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 13 10:52:13 functional-652709 containerd[9722]: time="2025-12-13T10:52:13.936994805Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-652709\" returns successfully"
	Dec 13 10:52:14 functional-652709 containerd[9722]: time="2025-12-13T10:52:14.627766596Z" level=info msg="No images store for sha256:d28e9eade065faa1f42a805507e403385dcb232e15f883e5d12433456d0d9625"
	Dec 13 10:52:14 functional-652709 containerd[9722]: time="2025-12-13T10:52:14.630156158Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-652709\""
	Dec 13 10:52:14 functional-652709 containerd[9722]: time="2025-12-13T10:52:14.640126089Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 10:52:14 functional-652709 containerd[9722]: time="2025-12-13T10:52:14.640812348Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-652709\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:16.347828   24007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:16.348561   24007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:16.350187   24007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:16.350768   24007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:16.352326   24007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +12.907693] overlayfs: idmapped layers are currently not supported
	[Dec13 09:25] overlayfs: idmapped layers are currently not supported
	[ +26.192425] overlayfs: idmapped layers are currently not supported
	[Dec13 09:26] overlayfs: idmapped layers are currently not supported
	[ +25.729788] overlayfs: idmapped layers are currently not supported
	[Dec13 09:27] overlayfs: idmapped layers are currently not supported
	[Dec13 09:28] overlayfs: idmapped layers are currently not supported
	[Dec13 09:31] overlayfs: idmapped layers are currently not supported
	[Dec13 09:32] overlayfs: idmapped layers are currently not supported
	[ +17.979093] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec13 09:43] overlayfs: idmapped layers are currently not supported
	[Dec13 09:45] overlayfs: idmapped layers are currently not supported
	[ +25.885102] overlayfs: idmapped layers are currently not supported
	[Dec13 09:46] overlayfs: idmapped layers are currently not supported
	[ +22.078149] overlayfs: idmapped layers are currently not supported
	[Dec13 09:47] overlayfs: idmapped layers are currently not supported
	[Dec13 09:48] overlayfs: idmapped layers are currently not supported
	[Dec13 09:49] overlayfs: idmapped layers are currently not supported
	[Dec13 09:51] overlayfs: idmapped layers are currently not supported
	[ +17.043564] overlayfs: idmapped layers are currently not supported
	[Dec13 09:52] overlayfs: idmapped layers are currently not supported
	[Dec13 09:53] overlayfs: idmapped layers are currently not supported
	[Dec13 10:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:19] hrtimer: interrupt took 21247146 ns
	
	
	==> kernel <==
	 10:52:16 up  3:34,  0 user,  load average: 1.29, 0.48, 0.53
	Linux functional-652709 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:52:13 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:52:13 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 527.
	Dec 13 10:52:13 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:52:13 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:52:13 functional-652709 kubelet[23800]: E1213 10:52:13.982924   23800 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:52:13 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:52:13 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:52:14 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 528.
	Dec 13 10:52:14 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:52:14 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:52:14 functional-652709 kubelet[23851]: E1213 10:52:14.683272   23851 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:52:14 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:52:14 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:52:15 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 529.
	Dec 13 10:52:15 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:52:15 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:52:15 functional-652709 kubelet[23906]: E1213 10:52:15.480388   23906 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:52:15 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:52:15 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:52:16 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 530.
	Dec 13 10:52:16 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:52:16 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:52:16 functional-652709 kubelet[23979]: E1213 10:52:16.232361   23979 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:52:16 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:52:16 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-652709 -n functional-652709
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-652709 -n functional-652709: exit status 2 (314.648123ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-652709" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.44s)
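The kubelet journal above shows the root cause behind this group of failures: a systemd restart loop (restart counter 527 through 530) in which every kubelet start fails the same validation, refusing to run on a host that exposes cgroup v1. A quick way to confirm which cgroup hierarchy the kernel provides is a filesystem-type probe; this is a diagnostic sketch rather than part of the test run, and it assumes shell access to the CI host (the binary and profile name are taken from this run):

	# "tmpfs" indicates cgroup v1, "cgroup2fs" indicates cgroup v2
	stat -fc %T /sys/fs/cgroup/
	# the same probe inside the minikube node container
	out/minikube-linux-arm64 -p functional-652709 ssh -- stat -fc %T /sys/fs/cgroup/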

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-652709 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-652709 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1213 10:49:46.929841  372401 out.go:360] Setting OutFile to fd 1 ...
I1213 10:49:46.929956  372401 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:49:46.929968  372401 out.go:374] Setting ErrFile to fd 2...
I1213 10:49:46.929974  372401 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:49:46.930333  372401 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
I1213 10:49:46.930932  372401 mustload.go:66] Loading cluster: functional-652709
I1213 10:49:46.931404  372401 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 10:49:46.931880  372401 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
I1213 10:49:46.951967  372401 host.go:66] Checking if "functional-652709" exists ...
I1213 10:49:46.952293  372401 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1213 10:49:47.082106  372401 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:49:47.071088584 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1213 10:49:47.082225  372401 api_server.go:166] Checking apiserver status ...
I1213 10:49:47.082288  372401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1213 10:49:47.082331  372401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
I1213 10:49:47.114599  372401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
W1213 10:49:47.238452  372401 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1213 10:49:47.241924  372401 out.go:179] * The control-plane node functional-652709 apiserver is not running: (state=Stopped)
I1213 10:49:47.245035  372401 out.go:179]   To start a cluster, run: "minikube start -p functional-652709"

stdout: * The control-plane node functional-652709 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-652709"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-652709 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-652709 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-652709 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-652709 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 372400: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-652709 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-652709 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.50s)
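Both tunnel invocations exit with status 103 before doing any work: the preflight check in the stderr above probes for a kube-apiserver process and finds none. The probe can be replayed by hand; a sketch using the binary and profile from this run, with the pgrep pattern copied from the log:

	# status 1 with empty output means no apiserver process, matching state=Stopped
	out/minikube-linux-arm64 -p functional-652709 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'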

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-652709 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-652709 apply -f testdata/testsvc.yaml: exit status 1 (116.879128ms)

** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-652709 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.12s)
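The apply fails before validation even begins: kubectl cannot download the OpenAPI schema because nothing is listening on the apiserver port, so the suggested --validate=false would not rescue it. The endpoint from the error message can be probed directly; a sketch using the address reported above (-k only skips TLS verification, since this is a reachability check, not an authenticated call):

	# "connection refused" here confirms there is no listener on 8441
	curl -k https://192.168.49.2:8441/healthz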

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (121.7s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.97.13.246": Temporary Error: Get "http://10.97.13.246": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-652709 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-652709 get svc nginx-svc: exit status 1 (67.830596ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-652709 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (121.70s)
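The direct-access probe times out against the ClusterIP because the tunnel never installed its host route. A healthy "minikube tunnel" adds a route for the cluster's service CIDR (10.96.0.0/12 by default, which contains 10.97.13.246) via the node IP; a sketch of checking for that route on the host:

	# with a working tunnel this prints a route via 192.168.49.2; here it prints nothing
	ip route show | grep 10.96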

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-652709 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-652709 create deployment hello-node --image kicbase/echo-server: exit status 1 (56.615319ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-652709 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.06s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-652709 service list: exit status 103 (271.913577ms)

-- stdout --
	* The control-plane node functional-652709 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-652709"

-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-652709 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-652709 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-652709\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.27s)
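This subtest and the four ServiceCmd subtests that follow (JSONOutput, HTTPS, Format, URL) fail identically: "minikube service" bails out with exit status 103 and the "apiserver is not running" banner instead of producing a list or URL. The component states can be read directly; a sketch using the binary and profile from this run:

	# prints host/kubelet/apiserver state; in this run the apiserver reports Stopped
	out/minikube-linux-arm64 status -p functional-652709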

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-652709 service list -o json: exit status 103 (266.636386ms)

-- stdout --
	* The control-plane node functional-652709 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-652709"

-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-652709 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.27s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-652709 service --namespace=default --https --url hello-node: exit status 103 (258.938963ms)

-- stdout --
	* The control-plane node functional-652709 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-652709"

-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-652709 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.26s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-652709 service hello-node --url --format={{.IP}}: exit status 103 (281.229618ms)

-- stdout --
	* The control-plane node functional-652709 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-652709"

-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-652709 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-652709 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-652709\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.28s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-652709 service hello-node --url: exit status 103 (264.510572ms)

-- stdout --
	* The control-plane node functional-652709 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-652709"

-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-652709 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-652709 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-652709"
functional_test.go:1579: failed to parse "* The control-plane node functional-652709 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-652709\"": parse "* The control-plane node functional-652709 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-652709\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.26s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.55s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-652709 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2540388801/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765623116872019606" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2540388801/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765623116872019606" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2540388801/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765623116872019606" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2540388801/001/test-1765623116872019606
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-652709 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (337.925814ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1213 10:51:57.210194  308915 retry.go:31] will retry after 612.420407ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 13 10:51 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 13 10:51 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 13 10:51 test-1765623116872019606
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 ssh cat /mount-9p/test-1765623116872019606
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-652709 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-652709 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (58.499424ms)

** stderr ** 
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-652709 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-652709 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (276.407044ms)

-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=41465)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec 13 10:51 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec 13 10:51 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec 13 10:51 test-1765623116872019606
	cat: /mount-9p/pod-dates: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-652709 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-652709 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2540388801/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-652709 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2540388801/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2540388801/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:41465
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2540388801/001 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-652709 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2540388801/001:/mount-9p --alsologtostderr -v=1] stderr:
I1213 10:51:56.932873  374878 out.go:360] Setting OutFile to fd 1 ...
I1213 10:51:56.933116  374878 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:51:56.933136  374878 out.go:374] Setting ErrFile to fd 2...
I1213 10:51:56.933153  374878 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:51:56.933430  374878 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
I1213 10:51:56.933706  374878 mustload.go:66] Loading cluster: functional-652709
I1213 10:51:56.934101  374878 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 10:51:56.934708  374878 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
I1213 10:51:56.952425  374878 host.go:66] Checking if "functional-652709" exists ...
I1213 10:51:56.952730  374878 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1213 10:51:57.027392  374878 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:51:57.012450774 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1213 10:51:57.027553  374878 cli_runner.go:164] Run: docker network inspect functional-652709 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1213 10:51:57.067729  374878 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2540388801/001 into VM as /mount-9p ...
I1213 10:51:57.070649  374878 out.go:179]   - Mount type:   9p
I1213 10:51:57.073526  374878 out.go:179]   - User ID:      docker
I1213 10:51:57.076336  374878 out.go:179]   - Group ID:     docker
I1213 10:51:57.079182  374878 out.go:179]   - Version:      9p2000.L
I1213 10:51:57.082097  374878 out.go:179]   - Message Size: 262144
I1213 10:51:57.084928  374878 out.go:179]   - Options:      map[]
I1213 10:51:57.087676  374878 out.go:179]   - Bind Address: 192.168.49.1:41465
I1213 10:51:57.090532  374878 out.go:179] * Userspace file server: 
I1213 10:51:57.090862  374878 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1213 10:51:57.090971  374878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
I1213 10:51:57.121762  374878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
I1213 10:51:57.237412  374878 mount.go:180] unmount for /mount-9p ran successfully
I1213 10:51:57.237459  374878 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1213 10:51:57.246022  374878 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=41465,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1213 10:51:57.256456  374878 main.go:127] stdlog: ufs.go:141 connected
I1213 10:51:57.256627  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tversion tag 65535 msize 262144 version '9P2000.L'
I1213 10:51:57.256658  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rversion tag 65535 msize 262144 version '9P2000'
I1213 10:51:57.256885  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1213 10:51:57.256946  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rattach tag 0 aqid (44319 1756b446 'd')
I1213 10:51:57.257694  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tstat tag 0 fid 0
I1213 10:51:57.257757  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (44319 1756b446 'd') m d775 at 0 mt 1765623116 l 4096 t 0 d 0 ext )
I1213 10:51:57.262958  374878 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/.mount-process: {Name:mk52912dd8d19beb03f9b9799cdd3aefed791565 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 10:51:57.263156  374878 mount.go:105] mount successful: ""
I1213 10:51:57.266581  374878 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2540388801/001 to /mount-9p
I1213 10:51:57.269442  374878 out.go:203] 
I1213 10:51:57.272264  374878 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1213 10:51:58.403363  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tstat tag 0 fid 0
I1213 10:51:58.403447  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (44319 1756b446 'd') m d775 at 0 mt 1765623116 l 4096 t 0 d 0 ext )
I1213 10:51:58.403829  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Twalk tag 0 fid 0 newfid 1 
I1213 10:51:58.403868  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rwalk tag 0 
I1213 10:51:58.404018  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Topen tag 0 fid 1 mode 0
I1213 10:51:58.404079  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Ropen tag 0 qid (44319 1756b446 'd') iounit 0
I1213 10:51:58.404241  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tstat tag 0 fid 0
I1213 10:51:58.404300  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (44319 1756b446 'd') m d775 at 0 mt 1765623116 l 4096 t 0 d 0 ext )
I1213 10:51:58.404460  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tread tag 0 fid 1 offset 0 count 262120
I1213 10:51:58.404600  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rread tag 0 count 258
I1213 10:51:58.404758  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tread tag 0 fid 1 offset 258 count 261862
I1213 10:51:58.404792  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rread tag 0 count 0
I1213 10:51:58.404979  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tread tag 0 fid 1 offset 258 count 262120
I1213 10:51:58.405019  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rread tag 0 count 0
I1213 10:51:58.405171  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1213 10:51:58.405236  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rwalk tag 0 (4431b 1756b446 '') 
I1213 10:51:58.405391  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tstat tag 0 fid 2
I1213 10:51:58.405463  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (4431b 1756b446 '') m 644 at 0 mt 1765623116 l 24 t 0 d 0 ext )
I1213 10:51:58.405625  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tstat tag 0 fid 2
I1213 10:51:58.405687  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (4431b 1756b446 '') m 644 at 0 mt 1765623116 l 24 t 0 d 0 ext )
I1213 10:51:58.405849  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tclunk tag 0 fid 2
I1213 10:51:58.405897  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rclunk tag 0
I1213 10:51:58.406086  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Twalk tag 0 fid 0 newfid 2 0:'test-1765623116872019606' 
I1213 10:51:58.406129  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rwalk tag 0 (4431d 1756b446 '') 
I1213 10:51:58.406326  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tstat tag 0 fid 2
I1213 10:51:58.406364  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rstat tag 0 st ('test-1765623116872019606' 'jenkins' 'jenkins' '' q (4431d 1756b446 '') m 644 at 0 mt 1765623116 l 24 t 0 d 0 ext )
I1213 10:51:58.406509  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tstat tag 0 fid 2
I1213 10:51:58.406557  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rstat tag 0 st ('test-1765623116872019606' 'jenkins' 'jenkins' '' q (4431d 1756b446 '') m 644 at 0 mt 1765623116 l 24 t 0 d 0 ext )
I1213 10:51:58.406721  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tclunk tag 0 fid 2
I1213 10:51:58.406760  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rclunk tag 0
I1213 10:51:58.406905  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1213 10:51:58.406959  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rwalk tag 0 (4431c 1756b446 '') 
I1213 10:51:58.407090  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tstat tag 0 fid 2
I1213 10:51:58.407130  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (4431c 1756b446 '') m 644 at 0 mt 1765623116 l 24 t 0 d 0 ext )
I1213 10:51:58.407255  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tstat tag 0 fid 2
I1213 10:51:58.407288  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (4431c 1756b446 '') m 644 at 0 mt 1765623116 l 24 t 0 d 0 ext )
I1213 10:51:58.407405  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tclunk tag 0 fid 2
I1213 10:51:58.407429  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rclunk tag 0
I1213 10:51:58.407539  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tread tag 0 fid 1 offset 258 count 262120
I1213 10:51:58.407567  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rread tag 0 count 0
I1213 10:51:58.407697  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tclunk tag 0 fid 1
I1213 10:51:58.407728  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rclunk tag 0
I1213 10:51:58.678735  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Twalk tag 0 fid 0 newfid 1 0:'test-1765623116872019606' 
I1213 10:51:58.678807  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rwalk tag 0 (4431d 1756b446 '') 
I1213 10:51:58.678986  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tstat tag 0 fid 1
I1213 10:51:58.679029  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rstat tag 0 st ('test-1765623116872019606' 'jenkins' 'jenkins' '' q (4431d 1756b446 '') m 644 at 0 mt 1765623116 l 24 t 0 d 0 ext )
I1213 10:51:58.679188  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Twalk tag 0 fid 1 newfid 2 
I1213 10:51:58.679229  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rwalk tag 0 
I1213 10:51:58.679346  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Topen tag 0 fid 2 mode 0
I1213 10:51:58.679414  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Ropen tag 0 qid (4431d 1756b446 '') iounit 0
I1213 10:51:58.679556  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tstat tag 0 fid 1
I1213 10:51:58.679599  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rstat tag 0 st ('test-1765623116872019606' 'jenkins' 'jenkins' '' q (4431d 1756b446 '') m 644 at 0 mt 1765623116 l 24 t 0 d 0 ext )
I1213 10:51:58.679739  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tread tag 0 fid 2 offset 0 count 262120
I1213 10:51:58.679804  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rread tag 0 count 24
I1213 10:51:58.679932  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tread tag 0 fid 2 offset 24 count 262120
I1213 10:51:58.679973  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rread tag 0 count 0
I1213 10:51:58.680108  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tread tag 0 fid 2 offset 24 count 262120
I1213 10:51:58.680145  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rread tag 0 count 0
I1213 10:51:58.680284  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tclunk tag 0 fid 2
I1213 10:51:58.680321  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rclunk tag 0
I1213 10:51:58.680525  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tclunk tag 0 fid 1
I1213 10:51:58.680555  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rclunk tag 0
I1213 10:51:59.018325  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tstat tag 0 fid 0
I1213 10:51:59.018404  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (44319 1756b446 'd') m d775 at 0 mt 1765623116 l 4096 t 0 d 0 ext )
I1213 10:51:59.018891  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Twalk tag 0 fid 0 newfid 1 
I1213 10:51:59.018954  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rwalk tag 0 
I1213 10:51:59.019119  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Topen tag 0 fid 1 mode 0
I1213 10:51:59.019177  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Ropen tag 0 qid (44319 1756b446 'd') iounit 0
I1213 10:51:59.019324  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tstat tag 0 fid 0
I1213 10:51:59.019359  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (44319 1756b446 'd') m d775 at 0 mt 1765623116 l 4096 t 0 d 0 ext )
I1213 10:51:59.019509  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tread tag 0 fid 1 offset 0 count 262120
I1213 10:51:59.019607  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rread tag 0 count 258
I1213 10:51:59.019783  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tread tag 0 fid 1 offset 258 count 261862
I1213 10:51:59.019808  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rread tag 0 count 0
I1213 10:51:59.019928  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tread tag 0 fid 1 offset 258 count 262120
I1213 10:51:59.019956  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rread tag 0 count 0
I1213 10:51:59.020092  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1213 10:51:59.020121  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rwalk tag 0 (4431b 1756b446 '') 
I1213 10:51:59.020233  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tstat tag 0 fid 2
I1213 10:51:59.020273  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (4431b 1756b446 '') m 644 at 0 mt 1765623116 l 24 t 0 d 0 ext )
I1213 10:51:59.020411  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tstat tag 0 fid 2
I1213 10:51:59.020444  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (4431b 1756b446 '') m 644 at 0 mt 1765623116 l 24 t 0 d 0 ext )
I1213 10:51:59.020559  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tclunk tag 0 fid 2
I1213 10:51:59.020581  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rclunk tag 0
I1213 10:51:59.020712  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Twalk tag 0 fid 0 newfid 2 0:'test-1765623116872019606' 
I1213 10:51:59.020740  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rwalk tag 0 (4431d 1756b446 '') 
I1213 10:51:59.020850  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tstat tag 0 fid 2
I1213 10:51:59.020880  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rstat tag 0 st ('test-1765623116872019606' 'jenkins' 'jenkins' '' q (4431d 1756b446 '') m 644 at 0 mt 1765623116 l 24 t 0 d 0 ext )
I1213 10:51:59.021025  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tstat tag 0 fid 2
I1213 10:51:59.021057  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rstat tag 0 st ('test-1765623116872019606' 'jenkins' 'jenkins' '' q (4431d 1756b446 '') m 644 at 0 mt 1765623116 l 24 t 0 d 0 ext )
I1213 10:51:59.021170  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tclunk tag 0 fid 2
I1213 10:51:59.021192  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rclunk tag 0
I1213 10:51:59.021329  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1213 10:51:59.021359  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rwalk tag 0 (4431c 1756b446 '') 
I1213 10:51:59.021467  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tstat tag 0 fid 2
I1213 10:51:59.021494  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (4431c 1756b446 '') m 644 at 0 mt 1765623116 l 24 t 0 d 0 ext )
I1213 10:51:59.021622  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tstat tag 0 fid 2
I1213 10:51:59.021666  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (4431c 1756b446 '') m 644 at 0 mt 1765623116 l 24 t 0 d 0 ext )
I1213 10:51:59.021784  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tclunk tag 0 fid 2
I1213 10:51:59.021807  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rclunk tag 0
I1213 10:51:59.021922  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tread tag 0 fid 1 offset 258 count 262120
I1213 10:51:59.021956  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rread tag 0 count 0
I1213 10:51:59.022107  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tclunk tag 0 fid 1
I1213 10:51:59.022144  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rclunk tag 0
I1213 10:51:59.023374  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1213 10:51:59.023448  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rerror tag 0 ename 'file not found' ecode 0
I1213 10:51:59.299901  374878 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35124 Tclunk tag 0 fid 0
I1213 10:51:59.299952  374878 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35124 Rclunk tag 0
I1213 10:51:59.301121  374878 main.go:127] stdlog: ufs.go:147 disconnected
I1213 10:51:59.323914  374878 out.go:179] * Unmounting /mount-9p ...
I1213 10:51:59.326866  374878 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1213 10:51:59.333788  374878 mount.go:180] unmount for /mount-9p ran successfully
I1213 10:51:59.333912  374878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/.mount-process: {Name:mk52912dd8d19beb03f9b9799cdd3aefed791565 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 10:51:59.337081  374878 out.go:203] 
W1213 10:51:59.339899  374878 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1213 10:51:59.342776  374878 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.55s)
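Note that the 9p mount itself worked in this subtest: the findmnt retry succeeded, the directory listing matches the files written by the test, and the ufs traffic above is clean. The failure only appears at the busybox pod step, which needs the apiserver. The mount half can be exercised on its own; a sketch in which /tmp/demo-mount is an illustrative host path and the remaining commands are taken from this log:

	# keep the mount process alive in the background, then verify from the guest
	out/minikube-linux-arm64 mount -p functional-652709 /tmp/demo-mount:/mount-9p &
	out/minikube-linux-arm64 -p functional-652709 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-arm64 -p functional-652709 ssh "sudo umount -f /mount-9p"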

x
+
TestKubernetesUpgrade (800.64s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-415704 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1213 11:19:47.365017  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:20:12.241797  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-415704 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (37.528329771s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-415704
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-415704: (1.368282268s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-415704 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-415704 status --format={{.Host}}: exit status 7 (100.408139ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
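Up to this point the upgrade flow behaved as scripted: the v1.28.0 start and the stop both completed, and exit status 7 from "status" on a stopped cluster is tolerated by the test. Only the restart on the newer version below fails. The three-step sequence can be replayed by hand; the commands are copied verbatim from this entry:

	out/minikube-linux-arm64 start -p kubernetes-upgrade-415704 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 stop -p kubernetes-upgrade-415704
	out/minikube-linux-arm64 start -p kubernetes-upgrade-415704 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd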
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-415704 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-415704 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 109 (12m35.573147055s)

-- stdout --
	* [kubernetes-upgrade-415704] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-415704" primary control-plane node in "kubernetes-upgrade-415704" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	
	

-- /stdout --
** stderr ** 
	I1213 11:20:23.260510  507417 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:20:23.260669  507417 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:20:23.260676  507417 out.go:374] Setting ErrFile to fd 2...
	I1213 11:20:23.260681  507417 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:20:23.260938  507417 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 11:20:23.261289  507417 out.go:368] Setting JSON to false
	I1213 11:20:23.262272  507417 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":14576,"bootTime":1765610247,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 11:20:23.262333  507417 start.go:143] virtualization:  
	I1213 11:20:23.267346  507417 out.go:179] * [kubernetes-upgrade-415704] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:20:23.270410  507417 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:20:23.270506  507417 notify.go:221] Checking for updates...
	I1213 11:20:23.277701  507417 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:20:23.280499  507417 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 11:20:23.283289  507417 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 11:20:23.286067  507417 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:20:23.288962  507417 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:20:23.292269  507417 config.go:182] Loaded profile config "kubernetes-upgrade-415704": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1213 11:20:23.292852  507417 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:20:23.328770  507417 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:20:23.328969  507417 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:20:23.420009  507417 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:20:23.410224815 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:20:23.420133  507417 docker.go:319] overlay module found
	I1213 11:20:23.423167  507417 out.go:179] * Using the docker driver based on existing profile
	I1213 11:20:23.425959  507417 start.go:309] selected driver: docker
	I1213 11:20:23.425974  507417 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-415704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-415704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:20:23.426079  507417 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:20:23.427025  507417 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:20:23.518570  507417 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:20:23.50880111 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:20:23.518955  507417 cni.go:84] Creating CNI manager for ""
	I1213 11:20:23.519016  507417 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 11:20:23.519057  507417 start.go:353] cluster config:
	{Name:kubernetes-upgrade-415704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-415704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:20:23.522098  507417 out.go:179] * Starting "kubernetes-upgrade-415704" primary control-plane node in "kubernetes-upgrade-415704" cluster
	I1213 11:20:23.524808  507417 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 11:20:23.527764  507417 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:20:23.530741  507417 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 11:20:23.530791  507417 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 11:20:23.530801  507417 cache.go:65] Caching tarball of preloaded images
	I1213 11:20:23.530897  507417 preload.go:238] Found /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 11:20:23.530908  507417 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 11:20:23.531015  507417 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/kubernetes-upgrade-415704/config.json ...
	I1213 11:20:23.531240  507417 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:20:23.552573  507417 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:20:23.552593  507417 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:20:23.552608  507417 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:20:23.552638  507417 start.go:360] acquireMachinesLock for kubernetes-upgrade-415704: {Name:mk6c11f7999ef1e09f7b812b32e75505a78eb1e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:20:23.552695  507417 start.go:364] duration metric: took 39.393µs to acquireMachinesLock for "kubernetes-upgrade-415704"
	I1213 11:20:23.552714  507417 start.go:96] Skipping create...Using existing machine configuration
	I1213 11:20:23.552720  507417 fix.go:54] fixHost starting: 
	I1213 11:20:23.552990  507417 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-415704 --format={{.State.Status}}
	I1213 11:20:23.588566  507417 fix.go:112] recreateIfNeeded on kubernetes-upgrade-415704: state=Stopped err=<nil>
	W1213 11:20:23.588593  507417 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 11:20:23.591859  507417 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-415704" ...
	I1213 11:20:23.591958  507417 cli_runner.go:164] Run: docker start kubernetes-upgrade-415704
	I1213 11:20:23.939150  507417 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-415704 --format={{.State.Status}}
	I1213 11:20:23.971260  507417 kic.go:430] container "kubernetes-upgrade-415704" state is running.
	I1213 11:20:23.971703  507417 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-415704
	I1213 11:20:23.999082  507417 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/kubernetes-upgrade-415704/config.json ...
	I1213 11:20:23.999380  507417 machine.go:94] provisionDockerMachine start ...
	I1213 11:20:23.999487  507417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-415704
	I1213 11:20:24.034380  507417 main.go:143] libmachine: Using SSH client type: native
	I1213 11:20:24.034901  507417 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33350 <nil> <nil>}
	I1213 11:20:24.034924  507417 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:20:24.035653  507417 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 11:20:27.247384  507417 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-415704
	
	I1213 11:20:27.247456  507417 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-415704"
	I1213 11:20:27.247566  507417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-415704
	I1213 11:20:27.285796  507417 main.go:143] libmachine: Using SSH client type: native
	I1213 11:20:27.286120  507417 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33350 <nil> <nil>}
	I1213 11:20:27.286132  507417 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-415704 && echo "kubernetes-upgrade-415704" | sudo tee /etc/hostname
	I1213 11:20:27.464869  507417 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-415704
	
	I1213 11:20:27.464952  507417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-415704
	I1213 11:20:27.482925  507417 main.go:143] libmachine: Using SSH client type: native
	I1213 11:20:27.483353  507417 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33350 <nil> <nil>}
	I1213 11:20:27.483380  507417 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-415704' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-415704/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-415704' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:20:27.634943  507417 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:20:27.634973  507417 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-307042/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-307042/.minikube}
	I1213 11:20:27.635002  507417 ubuntu.go:190] setting up certificates
	I1213 11:20:27.635011  507417 provision.go:84] configureAuth start
	I1213 11:20:27.635068  507417 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-415704
	I1213 11:20:27.652750  507417 provision.go:143] copyHostCerts
	I1213 11:20:27.652833  507417 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem, removing ...
	I1213 11:20:27.652847  507417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem
	I1213 11:20:27.652931  507417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem (1082 bytes)
	I1213 11:20:27.653027  507417 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem, removing ...
	I1213 11:20:27.653036  507417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem
	I1213 11:20:27.653063  507417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem (1123 bytes)
	I1213 11:20:27.653122  507417 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem, removing ...
	I1213 11:20:27.653130  507417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem
	I1213 11:20:27.653154  507417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem (1675 bytes)
	I1213 11:20:27.653203  507417 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-415704 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-415704 localhost minikube]
	I1213 11:20:27.781298  507417 provision.go:177] copyRemoteCerts
	I1213 11:20:27.781373  507417 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:20:27.781413  507417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-415704
	I1213 11:20:27.801887  507417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33350 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/kubernetes-upgrade-415704/id_rsa Username:docker}
	I1213 11:20:27.906538  507417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:20:27.926427  507417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1213 11:20:27.945227  507417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 11:20:27.967859  507417 provision.go:87] duration metric: took 332.824021ms to configureAuth
	I1213 11:20:27.967888  507417 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:20:27.968098  507417 config.go:182] Loaded profile config "kubernetes-upgrade-415704": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:20:27.968108  507417 machine.go:97] duration metric: took 3.968719689s to provisionDockerMachine
	I1213 11:20:27.968116  507417 start.go:293] postStartSetup for "kubernetes-upgrade-415704" (driver="docker")
	I1213 11:20:27.968128  507417 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:20:27.968181  507417 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:20:27.968233  507417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-415704
	I1213 11:20:27.993553  507417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33350 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/kubernetes-upgrade-415704/id_rsa Username:docker}
	I1213 11:20:28.104842  507417 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:20:28.108623  507417 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:20:28.108649  507417 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:20:28.108660  507417 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/addons for local assets ...
	I1213 11:20:28.108713  507417 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/files for local assets ...
	I1213 11:20:28.108792  507417 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> 3089152.pem in /etc/ssl/certs
	I1213 11:20:28.108900  507417 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:20:28.118729  507417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 11:20:28.149641  507417 start.go:296] duration metric: took 181.508925ms for postStartSetup
	I1213 11:20:28.149736  507417 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:20:28.149778  507417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-415704
	I1213 11:20:28.169466  507417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33350 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/kubernetes-upgrade-415704/id_rsa Username:docker}
	I1213 11:20:28.272995  507417 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:20:28.277967  507417 fix.go:56] duration metric: took 4.725240412s for fixHost
	I1213 11:20:28.278016  507417 start.go:83] releasing machines lock for "kubernetes-upgrade-415704", held for 4.725287452s
	I1213 11:20:28.278090  507417 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-415704
	I1213 11:20:28.301522  507417 ssh_runner.go:195] Run: cat /version.json
	I1213 11:20:28.301573  507417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-415704
	I1213 11:20:28.301885  507417 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:20:28.301942  507417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-415704
	I1213 11:20:28.341247  507417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33350 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/kubernetes-upgrade-415704/id_rsa Username:docker}
	I1213 11:20:28.345872  507417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33350 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/kubernetes-upgrade-415704/id_rsa Username:docker}
	I1213 11:20:28.563911  507417 ssh_runner.go:195] Run: systemctl --version
	I1213 11:20:28.570341  507417 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:20:28.575502  507417 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:20:28.575622  507417 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:20:28.585846  507417 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 11:20:28.585916  507417 start.go:496] detecting cgroup driver to use...
	I1213 11:20:28.585967  507417 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 11:20:28.586057  507417 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 11:20:28.604163  507417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:20:28.626909  507417 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:20:28.627086  507417 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:20:28.649520  507417 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:20:28.666704  507417 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:20:28.798591  507417 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:20:28.927419  507417 docker.go:234] disabling docker service ...
	I1213 11:20:28.927507  507417 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:20:28.952544  507417 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:20:28.966844  507417 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:20:29.112380  507417 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:20:29.276855  507417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:20:29.293976  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:20:29.317153  507417 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 11:20:29.336541  507417 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 11:20:29.355154  507417 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 11:20:29.355269  507417 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 11:20:29.375302  507417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:20:29.387438  507417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 11:20:29.396784  507417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:20:29.408915  507417 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:20:29.419752  507417 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 11:20:29.435372  507417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 11:20:29.449669  507417 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
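Two pieces of runtime plumbing happen above: /etc/crictl.yaml is written so crictl talks to containerd's CRI socket, and a series of in-place sed edits rewrite /etc/containerd/config.toml (pin the pause image to 3.10.1, force SystemdCgroup = false to match the cgroupfs driver detected on the host, normalize the runc runtime type to io.containerd.runc.v2, and re-enable unprivileged ports). Assuming a crictl binary on PATH, the first step could equivalently be done with its config subcommand, and the cgroup edit is easy to verify by hand:

	sudo crictl config --set runtime-endpoint=unix:///run/containerd/containerd.sock
	grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = false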
	I1213 11:20:29.459626  507417 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:20:29.469646  507417 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
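The two sysctl-level steps above check that bridged traffic is visible to iptables and turn on IPv4 forwarding, which kube-proxy and the CNI plugin depend on. Writing to /proc/sys directly is equivalent to:

	sudo sysctl -w net.ipv4.ip_forward=1
	sysctl net.ipv4.ip_forward   # should print: net.ipv4.ip_forward = 1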
	I1213 11:20:29.481609  507417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:20:29.657210  507417 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 11:20:29.900383  507417 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 11:20:29.900468  507417 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 11:20:29.905194  507417 start.go:564] Will wait 60s for crictl version
	I1213 11:20:29.905271  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:20:29.909456  507417 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:20:29.946848  507417 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 11:20:29.946925  507417 ssh_runner.go:195] Run: containerd --version
	I1213 11:20:29.977472  507417 ssh_runner.go:195] Run: containerd --version
	I1213 11:20:30.012097  507417 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 11:20:30.016182  507417 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-415704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:20:30.049177  507417 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 11:20:30.057980  507417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
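The one-liner above stages the edited hosts file in /tmp/h.$$ and copies it back with sudo cp instead of redirecting straight into /etc/hosts: a shell redirection would run with the caller's privileges, and truncating the file while grep is still reading it could lose its contents. The result can be checked with:

	grep host.minikube.internal /etc/hosts   # expect: 192.168.76.1	host.minikube.internal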
	I1213 11:20:30.072296  507417 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-415704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-415704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:20:30.072442  507417 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 11:20:30.072691  507417 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:20:30.108163  507417 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1213 11:20:30.108245  507417 ssh_runner.go:195] Run: which lz4
	I1213 11:20:30.113347  507417 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1213 11:20:30.125026  507417 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 11:20:30.125071  507417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (305624510 bytes)
	I1213 11:20:33.693940  507417 containerd.go:563] duration metric: took 3.580637013s to copy over tarball
	I1213 11:20:33.694068  507417 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 11:20:36.060074  507417 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.365980561s)
	I1213 11:20:36.060148  507417 kubeadm.go:910] preload failed, will try to load cached images: extracting tarball: 
	** stderr ** 
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
	tar: Exiting with failure status due to previous errors
	
	** /stderr **: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: Process exited with status 2
	stdout:
	
	stderr:
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
	tar: Exiting with failure status due to previous errors
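tar's -I lz4 flag filters the archive through lz4, and the "Cannot open: File exists" errors appear to come from unpacking the v1.35.0-beta.0 preload into a /var that already holds containerd snapshot content from the earlier v1.28.0 start of this profile. minikube treats the failure as non-fatal and, as the next lines show, falls back to loading individually cached images. For manual experiments only (not what minikube itself does), GNU tar offers ways around the collision:

	# explicit pipeline equivalent to tar's -I lz4 filter:
	lz4 -dc /preloaded.tar.lz4 | sudo tar --xattrs --xattrs-include security.capability -C /var -x
	# or leave already-present paths untouched:
	sudo tar --skip-old-files -I lz4 -C /var -xf /preloaded.tar.lz4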
	I1213 11:20:36.060249  507417 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:20:36.212463  507417 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1213 11:20:36.212488  507417 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1213 11:20:36.212543  507417 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:20:36.212751  507417 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:20:36.212856  507417 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:20:36.212958  507417 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:20:36.213051  507417 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:20:36.213142  507417 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1213 11:20:36.213225  507417 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1213 11:20:36.213328  507417 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:20:36.214828  507417 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:20:36.215204  507417 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:20:36.215374  507417 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1213 11:20:36.215526  507417 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:20:36.215660  507417 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:20:36.215783  507417 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1213 11:20:36.215908  507417 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:20:36.217123  507417 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:20:36.564067  507417 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1213 11:20:36.564152  507417 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:20:36.579916  507417 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1213 11:20:36.579994  507417 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:20:36.581497  507417 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1213 11:20:36.581562  507417 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:20:36.590423  507417 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1213 11:20:36.590518  507417 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1213 11:20:36.600628  507417 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1213 11:20:36.600707  507417 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:20:36.603122  507417 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1213 11:20:36.603201  507417 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1213 11:20:36.647756  507417 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1213 11:20:36.647848  507417 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:20:36.740602  507417 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1213 11:20:36.740700  507417 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:20:36.740779  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:20:36.753522  507417 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1213 11:20:36.753609  507417 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:20:36.753688  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:20:36.753803  507417 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1213 11:20:36.753843  507417 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:20:36.753884  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:20:36.753975  507417 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1213 11:20:36.754024  507417 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1213 11:20:36.754062  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:20:36.754164  507417 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1213 11:20:36.754214  507417 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:20:36.754251  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:20:36.754342  507417 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1213 11:20:36.754379  507417 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1213 11:20:36.754415  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:20:36.754501  507417 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1213 11:20:36.754542  507417 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:20:36.754582  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:20:36.754673  507417 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:20:36.799315  507417 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:20:36.799437  507417 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:20:36.799516  507417 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:20:36.799558  507417 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1213 11:20:36.799634  507417 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1213 11:20:36.799606  507417 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:20:36.799720  507417 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:20:36.969361  507417 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:20:36.969460  507417 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:20:36.969521  507417 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:20:36.969581  507417 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:20:36.969630  507417 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1213 11:20:36.969684  507417 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1213 11:20:36.969735  507417 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:20:37.155338  507417 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:20:37.155426  507417 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:20:37.155479  507417 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1213 11:20:37.155550  507417 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:20:37.155606  507417 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:20:37.155664  507417 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1213 11:20:37.155714  507417 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1213 11:20:37.332979  507417 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1213 11:20:37.333112  507417 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1213 11:20:37.334242  507417 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1213 11:20:37.334333  507417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1213 11:20:37.334396  507417 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1213 11:20:37.334435  507417 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1213 11:20:37.334487  507417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1213 11:20:37.334542  507417 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1213 11:20:37.343388  507417 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1213 11:20:37.343426  507417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1213 11:20:37.343468  507417 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1213 11:20:37.343484  507417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1213 11:20:37.385531  507417 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1213 11:20:37.385642  507417 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	W1213 11:20:37.438222  507417 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1213 11:20:37.438443  507417 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1213 11:20:37.438522  507417 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:20:37.553187  507417 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1213 11:20:37.553230  507417 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:20:37.553279  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:20:37.578506  507417 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:20:37.712026  507417 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1213 11:20:37.712090  507417 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1213 11:20:37.827940  507417 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1213 11:20:37.828048  507417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1213 11:20:39.792265  507417 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (2.080152901s)
	I1213 11:20:39.792295  507417 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1213 11:20:39.792329  507417 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.964266492s)
	I1213 11:20:39.792343  507417 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1213 11:20:39.792371  507417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1213 11:20:39.936918  507417 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1213 11:20:39.937038  507417 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1213 11:20:40.623466  507417 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1213 11:20:40.623517  507417 cache_images.go:94] duration metric: took 4.411016643s to LoadCachedImages
	W1213 11:20:40.623577  507417 out.go:285] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0: no such file or directory
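Note: the block above is the cached-image load path. For each tarball, minikube stats the destination on the node, transfers it only when the stat fails, then imports it into containerd's k8s.io namespace with ctr. A minimal local sketch of that check/transfer/import sequence (hypothetical helper name; the real cache_images.go runs every command through its ssh_runner over SSH):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ensureImageLoaded mirrors the log sequence: stat the tarball on the
    // node, copy it only if missing, then import it into containerd.
    func ensureImageLoaded(cachePath, nodePath string) error {
        // stat exits non-zero when the file is absent -- the
        // "Process exited with status 1" lines above.
        if err := exec.Command("stat", "-c", "%s %y", nodePath).Run(); err != nil {
            // minikube scps the file here; a local cp stands in for it.
            if err := exec.Command("cp", cachePath, nodePath).Run(); err != nil {
                return fmt.Errorf("transfer %s: %w", cachePath, err)
            }
        }
        out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", nodePath).CombinedOutput()
        if err != nil {
            return fmt.Errorf("import %s: %v: %s", nodePath, err, out)
        }
        return nil
    }

    func main() {
        fmt.Println(ensureImageLoaded(
            "/home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1",
            "/var/lib/minikube/images/pause_3.10.1"))
    }

The "Unable to load cached images" warning above is the loader reporting that kube-proxy_v1.35.0-beta.0 was never present in the host cache directory; the other images were transferred and imported successfully.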
	I1213 11:20:40.623586  507417 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 11:20:40.623676  507417 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-415704 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-415704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:20:40.623736  507417 ssh_runner.go:195] Run: sudo crictl info
	I1213 11:20:40.649546  507417 cni.go:84] Creating CNI manager for ""
	I1213 11:20:40.649577  507417 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 11:20:40.649601  507417 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 11:20:40.649624  507417 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-415704 NodeName:kubernetes-upgrade-415704 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:20:40.649733  507417 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kubernetes-upgrade-415704"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
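The YAML above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is rendered from the cluster parameters shown in the kubeadm.go:190 line. As a toy illustration of that rendering step, reduced to the InitConfiguration header only (not minikube's actual template, which lives in its kubeadm bootstrapper):

    package main

    import (
        "os"
        "text/template"
    )

    // Substitute node parameters into a config skeleton, the way the
    // generated kubeadm.yaml above comes from AdvertiseAddress/APIServerPort.
    const tmpl = "apiVersion: kubeadm.k8s.io/v1beta4\n" +
        "kind: InitConfiguration\n" +
        "localAPIEndpoint:\n" +
        "  advertiseAddress: {{.AdvertiseAddress}}\n" +
        "  bindPort: {{.APIServerPort}}\n"

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        _ = t.Execute(os.Stdout, struct {
            AdvertiseAddress string
            APIServerPort    int
        }{"192.168.76.2", 8443})
    }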
	I1213 11:20:40.649801  507417 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 11:20:40.659412  507417 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 11:20:40.659489  507417 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:20:40.667653  507417 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (336 bytes)
	I1213 11:20:40.680719  507417 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 11:20:40.694640  507417 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2245 bytes)
	I1213 11:20:40.709279  507417 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:20:40.713625  507417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:20:40.723746  507417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:20:40.894634  507417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:20:40.912073  507417 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/kubernetes-upgrade-415704 for IP: 192.168.76.2
	I1213 11:20:40.912147  507417 certs.go:195] generating shared ca certs ...
	I1213 11:20:40.912188  507417 certs.go:227] acquiring lock for ca certs: {Name:mkca84a85b1b664dba08ef567e84d493256f825e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:20:40.912394  507417 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key
	I1213 11:20:40.912488  507417 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key
	I1213 11:20:40.912548  507417 certs.go:257] generating profile certs ...
	I1213 11:20:40.912668  507417 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/kubernetes-upgrade-415704/client.key
	I1213 11:20:40.912782  507417 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/kubernetes-upgrade-415704/apiserver.key.a6765115
	I1213 11:20:40.912867  507417 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/kubernetes-upgrade-415704/proxy-client.key
	I1213 11:20:40.913017  507417 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem (1338 bytes)
	W1213 11:20:40.913079  507417 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915_empty.pem, impossibly tiny 0 bytes
	I1213 11:20:40.913116  507417 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:20:40.913173  507417 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:20:40.913234  507417 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:20:40.913295  507417 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem (1675 bytes)
	I1213 11:20:40.913373  507417 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 11:20:40.914180  507417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:20:40.972506  507417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:20:41.007206  507417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:20:41.027803  507417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:20:41.047248  507417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/kubernetes-upgrade-415704/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1213 11:20:41.065648  507417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/kubernetes-upgrade-415704/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 11:20:41.084411  507417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/kubernetes-upgrade-415704/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:20:41.104085  507417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/kubernetes-upgrade-415704/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 11:20:41.127759  507417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /usr/share/ca-certificates/3089152.pem (1708 bytes)
	I1213 11:20:41.147937  507417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:20:41.167170  507417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem --> /usr/share/ca-certificates/308915.pem (1338 bytes)
	I1213 11:20:41.187351  507417 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:20:41.201406  507417 ssh_runner.go:195] Run: openssl version
	I1213 11:20:41.211212  507417 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:20:41.223806  507417 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:20:41.236613  507417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:20:41.241542  507417 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:20:41.241653  507417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:20:41.285215  507417 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:20:41.293844  507417 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/308915.pem
	I1213 11:20:41.302556  507417 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/308915.pem /etc/ssl/certs/308915.pem
	I1213 11:20:41.312967  507417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/308915.pem
	I1213 11:20:41.317571  507417 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:22 /usr/share/ca-certificates/308915.pem
	I1213 11:20:41.317681  507417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308915.pem
	I1213 11:20:41.360019  507417 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 11:20:41.368712  507417 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3089152.pem
	I1213 11:20:41.377218  507417 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3089152.pem /etc/ssl/certs/3089152.pem
	I1213 11:20:41.386211  507417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3089152.pem
	I1213 11:20:41.390349  507417 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:22 /usr/share/ca-certificates/3089152.pem
	I1213 11:20:41.390459  507417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3089152.pem
	I1213 11:20:41.437157  507417 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
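Each cert-install round above follows the standard OpenSSL hashed-directory convention: place the PEM under /usr/share/ca-certificates, compute its subject hash with `openssl x509 -hash -noout`, and point /etc/ssl/certs/<hash>.0 at it -- which is where b5213941.0, 51391683.0 and 3ec20f2e.0 come from. A hypothetical sketch of the hash-and-link step:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCACert computes the OpenSSL subject hash of a PEM and creates the
    // /etc/ssl/certs/<hash>.0 symlink (ln -fs semantics, so remove first).
    func linkCACert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link)
        return os.Symlink(pemPath, link)
    }

    func main() {
        fmt.Println(linkCACert("/usr/share/ca-certificates/minikubeCA.pem"))
    }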
	I1213 11:20:41.445325  507417 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:20:41.449632  507417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 11:20:41.494273  507417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 11:20:41.536745  507417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 11:20:41.579191  507417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 11:20:41.624473  507417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 11:20:41.681171  507417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
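`-checkend 86400` asks OpenSSL whether the certificate expires within the next 86400 seconds (24 hours); exit status 0 means it is still valid for at least a day, which is why no regeneration follows. The same check in native Go, as a sketch:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the certificate at path expires within d,
    // matching `openssl x509 -checkend` semantics.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        fmt.Println(expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour))
    }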
	I1213 11:20:41.753482  507417 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-415704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-415704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:20:41.753612  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 11:20:41.753709  507417 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:20:41.810066  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:20:41.810140  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:20:41.810159  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:20:41.810174  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:20:41.810206  507417 cri.go:89] found id: ""
	I1213 11:20:41.810276  507417 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1213 11:20:41.830413  507417 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T11:20:41Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1213 11:20:41.830563  507417 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:20:41.839115  507417 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 11:20:41.839189  507417 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 11:20:41.839295  507417 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 11:20:41.847366  507417 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 11:20:41.847843  507417 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-415704" does not appear in /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 11:20:41.848003  507417 kubeconfig.go:62] /home/jenkins/minikube-integration/22127-307042/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-415704" cluster setting kubeconfig missing "kubernetes-upgrade-415704" context setting]
	I1213 11:20:41.848325  507417 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/kubeconfig: {Name:mk9039d1d506122ff52224469c478e739c5ebabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:20:41.848916  507417 kapi.go:59] client config for kubernetes-upgrade-415704: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22127-307042/.minikube/profiles/kubernetes-upgrade-415704/client.crt", KeyFile:"/home/jenkins/minikube-integration/22127-307042/.minikube/profiles/kubernetes-upgrade-415704/client.key", CAFile:"/home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 11:20:41.849512  507417 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 11:20:41.849736  507417 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 11:20:41.849765  507417 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 11:20:41.849783  507417 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 11:20:41.849804  507417 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
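The kapi.go:59 line above is the rest.Config minikube assembles for the repaired kubeconfig: client-certificate auth against https://192.168.76.2:8443. An equivalent client built directly with client-go looks roughly like this (sketch only; needs the k8s.io/client-go module):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // Same host and certificate paths as the logged rest.Config.
        cfg := &rest.Config{
            Host: "https://192.168.76.2:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/home/jenkins/minikube-integration/22127-307042/.minikube/profiles/kubernetes-upgrade-415704/client.crt",
                KeyFile:  "/home/jenkins/minikube-integration/22127-307042/.minikube/profiles/kubernetes-upgrade-415704/client.key",
                CAFile:   "/home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt",
            },
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        fmt.Println(clientset != nil, err)
    }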
	I1213 11:20:41.850134  507417 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 11:20:41.860924  507417 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-13 11:20:02.470819480 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-13 11:20:40.703140953 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///run/containerd/containerd.sock
	   name: "kubernetes-upgrade-415704"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
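The detected drift is the kubeadm v1beta3 -> v1beta4 API migration plus the version bump from v1.28.0 (the old etcd proxy-refresh-interval override is also dropped): extraArgs changed from a plain string map to an ordered list of name/value pairs. The shape change, approximated in Go (kubeadm's real types live under cmd/kubeadm/app/apis/kubeadm):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // v1beta4 models each extra arg as a name/value pair so arguments can
    // repeat and keep their order; v1beta3 used map[string]string.
    type argV1beta4 struct {
        Name  string `json:"name"`
        Value string `json:"value"`
    }

    func main() {
        oldArgs := map[string]string{"leader-elect": "false"}           // v1beta3
        newArgs := []argV1beta4{{Name: "leader-elect", Value: "false"}} // v1beta4
        a, _ := json.Marshal(oldArgs)
        b, _ := json.Marshal(newArgs)
        fmt.Println(string(a)) // {"leader-elect":"false"}
        fmt.Println(string(b)) // [{"name":"leader-elect","value":"false"}]
    }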
	I1213 11:20:41.860990  507417 kubeadm.go:1161] stopping kube-system containers ...
	I1213 11:20:41.861016  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1213 11:20:41.861105  507417 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:20:41.921165  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:20:41.921231  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:20:41.921250  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:20:41.921268  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:20:41.921286  507417 cri.go:89] found id: ""
	I1213 11:20:41.921321  507417 cri.go:252] Stopping containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:20:41.921417  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:20:41.925586  507417 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee
	I1213 11:20:41.962168  507417 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 11:20:41.979211  507417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:20:41.987833  507417 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5639 Dec 13 11:20 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Dec 13 11:20 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec 13 11:20 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Dec 13 11:20 /etc/kubernetes/scheduler.conf
	
	I1213 11:20:41.987979  507417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:20:41.997223  507417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:20:42.008243  507417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:20:42.019117  507417 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 11:20:42.019244  507417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:20:42.028015  507417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:20:42.037470  507417 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 11:20:42.037612  507417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
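The grep/rm pairs above are a kubeconfig sanity pass: any conf file that does not reference https://control-plane.minikube.internal:8443 is removed so the `kubeadm init phase kubeconfig` run below regenerates it against the right endpoint. A simplified local stand-in (minikube issues these commands through its ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        for _, conf := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            // grep exits 1 when the endpoint is absent; remove the file so
            // kubeadm rewrites it (the log uses `sudo rm -f`).
            if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
                fmt.Printf("%s: endpoint missing, removing\n", conf)
                _ = exec.Command("sudo", "rm", "-f", conf).Run()
            }
        }
    }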
	I1213 11:20:42.045697  507417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 11:20:42.054649  507417 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 11:20:42.135466  507417 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 11:20:43.517051  507417 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.381482637s)
	I1213 11:20:43.517201  507417 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 11:20:43.745186  507417 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 11:20:43.817773  507417 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 11:20:43.868612  507417 api_server.go:52] waiting for apiserver process to appear ...
	I1213 11:20:43.868764  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:20:44.369578  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:20:44.869441  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:20:45.370152  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:20:45.868831  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:20:46.369762  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:20:46.868871  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:20:47.369029  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:20:47.869783  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:20:48.369708  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:20:48.868908  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:20:49.369657  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:20:49.868948  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:20:50.369068  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:20:50.869146  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:20:51.369723  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:20:51.868829  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:20:52.369510  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:20:52.869478  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:20:53.369121  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:20:53.869308  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:20:54.369339  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:20:54.868930  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:20:55.369462  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:20:55.868829  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:20:56.368876  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:20:56.869670  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:20:57.369484  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:20:57.868855  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:20:58.369057  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:20:58.869518  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:20:59.369603  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:20:59.869365  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:00.369775  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:00.869822  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:01.369598  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:01.869879  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:02.369592  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:02.868852  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:03.369448  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:03.869797  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:04.369526  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:04.868911  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:05.369603  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:05.868834  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:06.369838  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:06.869300  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:07.368881  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:07.869381  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:08.369316  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:08.869612  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:09.368856  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:09.868872  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:10.368864  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:10.869797  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:11.368906  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:11.869736  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:12.369526  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:12.868798  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:13.368934  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:13.868842  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:14.369617  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:14.869782  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:15.368862  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:15.868880  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:16.369366  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:16.868866  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:17.368906  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:17.869108  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:18.368850  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:18.869645  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:19.369155  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:19.869615  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:20.369204  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:20.869655  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:21.369619  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:21.869011  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:22.369421  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:22.875117  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:23.369017  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:23.868913  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:24.368867  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:24.869004  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:25.368867  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:25.868855  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:26.368898  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:26.869436  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:27.368917  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:27.869620  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:28.369590  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:28.869715  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:29.369782  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:29.869475  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:30.369396  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:30.869528  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:31.368920  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:31.869828  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:32.369825  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:32.868945  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:33.369603  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:33.869725  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:34.369733  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:34.869632  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:35.368921  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:35.869729  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:36.369822  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:36.869835  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:37.368889  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:37.869658  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:38.369253  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:38.869673  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:39.368936  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:39.869478  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:40.369577  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:40.869317  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:41.369612  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:41.869595  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:42.369475  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:42.869721  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:43.368946  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
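The ~500 ms cadence of the pgrep lines above is api_server.go polling for a kube-apiserver process after the init phases; the timestamps show none appeared in roughly a minute, so the code falls back to collecting logs below. The loop reduces to something like this (local sketch, not the SSH-backed original):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerProcess polls pgrep every 500ms until the apiserver
    // process shows up or the deadline passes.
    func waitForAPIServerProcess(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil // process found
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
    }

    func main() {
        fmt.Println(waitForAPIServerProcess(time.Minute))
    }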
	I1213 11:21:43.868852  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:21:43.868974  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:21:43.894496  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:21:43.894519  507417 cri.go:89] found id: ""
	I1213 11:21:43.894527  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:21:43.894584  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:21:43.898492  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:21:43.898568  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:21:43.923135  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:21:43.923155  507417 cri.go:89] found id: ""
	I1213 11:21:43.923163  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:21:43.923221  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:21:43.926959  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:21:43.927026  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:21:43.951450  507417 cri.go:89] found id: ""
	I1213 11:21:43.951475  507417 logs.go:282] 0 containers: []
	W1213 11:21:43.951484  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:21:43.951491  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:21:43.951548  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:21:43.976250  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:21:43.976277  507417 cri.go:89] found id: ""
	I1213 11:21:43.976286  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:21:43.976343  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:21:43.980938  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:21:43.981055  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:21:44.011224  507417 cri.go:89] found id: ""
	I1213 11:21:44.011252  507417 logs.go:282] 0 containers: []
	W1213 11:21:44.011262  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:21:44.011268  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:21:44.011332  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:21:44.036856  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:21:44.036878  507417 cri.go:89] found id: ""
	I1213 11:21:44.036886  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:21:44.036944  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:21:44.040767  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:21:44.040842  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:21:44.066176  507417 cri.go:89] found id: ""
	I1213 11:21:44.066202  507417 logs.go:282] 0 containers: []
	W1213 11:21:44.066211  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:21:44.066218  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:21:44.066287  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:21:44.092085  507417 cri.go:89] found id: ""
	I1213 11:21:44.092107  507417 logs.go:282] 0 containers: []
	W1213 11:21:44.092116  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:21:44.092130  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:21:44.092142  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:21:44.152672  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:21:44.152708  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:21:44.196882  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:21:44.196956  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:21:44.235472  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:21:44.235511  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:21:44.267452  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:21:44.267483  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:21:44.305635  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:21:44.305710  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:21:44.324390  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:21:44.324421  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:21:44.393495  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:21:44.393518  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:21:44.393533  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:21:44.428519  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:21:44.428571  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
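With the apiserver still absent, logs.go:123 sweeps diagnostics: every "Gathering logs for ..." entry is a shell pipeline run on the node (journalctl for systemd units, crictl logs for containers, kubectl describe for nodes -- the last of which fails above because nothing is listening on 8443). In outline, using the exact commands from the log (hypothetical stand-in for the SSH-backed gatherer):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        sources := map[string]string{
            "kubelet":    "sudo journalctl -u kubelet -n 400",
            "containerd": "sudo journalctl -u containerd -n 400",
            "dmesg":      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        }
        for name, cmdline := range sources {
            out, err := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
            fmt.Printf("== %s (err=%v) ==\n%s\n", name, err, out)
        }
    }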
	I1213 11:21:46.958853  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:46.969237  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:21:46.969315  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:21:46.995362  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:21:46.995384  507417 cri.go:89] found id: ""
	I1213 11:21:46.995398  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:21:46.995455  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:21:47.000360  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:21:47.000497  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:21:47.027066  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:21:47.027135  507417 cri.go:89] found id: ""
	I1213 11:21:47.027156  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:21:47.027224  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:21:47.030985  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:21:47.031067  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:21:47.056124  507417 cri.go:89] found id: ""
	I1213 11:21:47.056160  507417 logs.go:282] 0 containers: []
	W1213 11:21:47.056170  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:21:47.056176  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:21:47.056235  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:21:47.085428  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:21:47.085449  507417 cri.go:89] found id: ""
	I1213 11:21:47.085457  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:21:47.085512  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:21:47.089106  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:21:47.089182  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:21:47.114414  507417 cri.go:89] found id: ""
	I1213 11:21:47.114441  507417 logs.go:282] 0 containers: []
	W1213 11:21:47.114451  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:21:47.114457  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:21:47.114526  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:21:47.140257  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:21:47.140279  507417 cri.go:89] found id: ""
	I1213 11:21:47.140289  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:21:47.140361  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:21:47.144165  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:21:47.144237  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:21:47.181151  507417 cri.go:89] found id: ""
	I1213 11:21:47.181177  507417 logs.go:282] 0 containers: []
	W1213 11:21:47.181187  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:21:47.181193  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:21:47.181253  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:21:47.213101  507417 cri.go:89] found id: ""
	I1213 11:21:47.213128  507417 logs.go:282] 0 containers: []
	W1213 11:21:47.213137  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:21:47.213151  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:21:47.213163  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:21:47.275257  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:21:47.275292  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:21:47.346966  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:21:47.346989  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:21:47.347008  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:21:47.378368  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:21:47.378397  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:21:47.395197  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:21:47.395225  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:21:47.431522  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:21:47.431554  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:21:47.470629  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:21:47.470667  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:21:47.502939  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:21:47.502969  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:21:47.533045  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:21:47.533077  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
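The cycle above is one full diagnostic pass: minikube enumerates CRI containers for each control-plane component with `sudo crictl ps -a --quiet --name=<component>`, then tails logs for whatever it found. A minimal Go sketch of the discovery step, assuming `crictl` and passwordless sudo on the node (the component list is read off this log, not taken from minikube's API):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors the `sudo crictl ps -a --quiet --name=<name>`
// calls in the log: it returns the IDs of all containers (any state)
// whose name matches the given component.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
```

Empty results for coredns, kube-proxy, kindnet, and storage-provisioner, as in the pass above, are consistent with an apiserver that never became reachable: the static control-plane pods exist, but nothing scheduled through the API does.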
	I1213 11:21:50.064584  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:50.075747  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:21:50.075821  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:21:50.102886  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:21:50.102907  507417 cri.go:89] found id: ""
	I1213 11:21:50.102915  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:21:50.102973  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:21:50.107048  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:21:50.107122  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:21:50.133416  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:21:50.133440  507417 cri.go:89] found id: ""
	I1213 11:21:50.133449  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:21:50.133506  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:21:50.137517  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:21:50.137603  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:21:50.164707  507417 cri.go:89] found id: ""
	I1213 11:21:50.164739  507417 logs.go:282] 0 containers: []
	W1213 11:21:50.164749  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:21:50.164756  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:21:50.164816  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:21:50.193005  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:21:50.193047  507417 cri.go:89] found id: ""
	I1213 11:21:50.193057  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:21:50.193122  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:21:50.198097  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:21:50.198178  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:21:50.225901  507417 cri.go:89] found id: ""
	I1213 11:21:50.225948  507417 logs.go:282] 0 containers: []
	W1213 11:21:50.225958  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:21:50.225964  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:21:50.226045  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:21:50.255690  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:21:50.255713  507417 cri.go:89] found id: ""
	I1213 11:21:50.255723  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:21:50.255783  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:21:50.259735  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:21:50.259835  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:21:50.289264  507417 cri.go:89] found id: ""
	I1213 11:21:50.289290  507417 logs.go:282] 0 containers: []
	W1213 11:21:50.289298  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:21:50.289305  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:21:50.289365  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:21:50.314936  507417 cri.go:89] found id: ""
	I1213 11:21:50.315004  507417 logs.go:282] 0 containers: []
	W1213 11:21:50.315026  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:21:50.315051  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:21:50.315081  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:21:50.372706  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:21:50.372742  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:21:50.402250  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:21:50.402279  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:21:50.419003  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:21:50.419036  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:21:50.486031  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:21:50.486052  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:21:50.486065  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:21:50.520437  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:21:50.520471  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:21:50.552498  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:21:50.552529  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:21:50.584545  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:21:50.584577  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:21:50.612783  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:21:50.612814  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
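Every "describe nodes" attempt in this run fails identically: the kubeconfig at /var/lib/minikube/kubeconfig targets localhost:8443 and nothing is accepting connections there. A hedged probe of that endpoint (the /healthz path is the apiserver's standard health endpoint; the port comes from the error text) separates "connection refused" from TLS or auth problems:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a self-signed certificate; skipping
			// verification is acceptable for a reachability probe only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8443/healthz")
	if err != nil {
		// With the control plane down this prints a "connection refused"
		// error matching the kubectl output in the log.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
}
```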
	I1213 11:21:53.142846  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:53.152933  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:21:53.153004  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:21:53.190854  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:21:53.190880  507417 cri.go:89] found id: ""
	I1213 11:21:53.190888  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:21:53.190949  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:21:53.195088  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:21:53.195167  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:21:53.243921  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:21:53.243943  507417 cri.go:89] found id: ""
	I1213 11:21:53.243952  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:21:53.244007  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:21:53.252957  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:21:53.253027  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:21:53.285108  507417 cri.go:89] found id: ""
	I1213 11:21:53.285175  507417 logs.go:282] 0 containers: []
	W1213 11:21:53.285201  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:21:53.285220  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:21:53.285301  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:21:53.317657  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:21:53.317677  507417 cri.go:89] found id: ""
	I1213 11:21:53.317685  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:21:53.317770  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:21:53.321759  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:21:53.321874  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:21:53.347572  507417 cri.go:89] found id: ""
	I1213 11:21:53.347640  507417 logs.go:282] 0 containers: []
	W1213 11:21:53.347666  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:21:53.347685  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:21:53.347756  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:21:53.373868  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:21:53.373950  507417 cri.go:89] found id: ""
	I1213 11:21:53.373974  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:21:53.374049  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:21:53.377649  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:21:53.377720  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:21:53.403165  507417 cri.go:89] found id: ""
	I1213 11:21:53.403189  507417 logs.go:282] 0 containers: []
	W1213 11:21:53.403198  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:21:53.403205  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:21:53.403264  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:21:53.432292  507417 cri.go:89] found id: ""
	I1213 11:21:53.432317  507417 logs.go:282] 0 containers: []
	W1213 11:21:53.432327  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:21:53.432342  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:21:53.432353  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:21:53.494122  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:21:53.494159  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:21:53.564555  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:21:53.564579  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:21:53.564594  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:21:53.614922  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:21:53.614960  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:21:53.648762  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:21:53.648794  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:21:53.681203  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:21:53.681242  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:21:53.711963  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:21:53.711997  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:21:53.729287  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:21:53.729318  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:21:53.760753  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:21:53.760783  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
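The gathering pass itself is journalctl for the kubelet and containerd units plus `crictl logs --tail 400` per discovered container. A sketch reproducing it under the same assumptions (crictl and journalctl present on the node); the two container IDs are the kube-apiserver and etcd IDs reported earlier in this log:

```go
package main

import (
	"fmt"
	"os/exec"
)

// tailUnit mirrors `sudo journalctl -u <unit> -n 400`.
func tailUnit(unit string) (string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").CombinedOutput()
	return string(out), err
}

// tailContainer mirrors `sudo crictl logs --tail 400 <id>`.
func tailContainer(id string) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, u := range []string{"kubelet", "containerd"} {
		if out, err := tailUnit(u); err == nil {
			fmt.Printf("== unit %s ==\n%s", u, out)
		}
	}
	// IDs taken from the discovery pass in this log (kube-apiserver, etcd).
	ids := []string{
		"d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee",
		"ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9",
	}
	for _, id := range ids {
		if out, err := tailContainer(id); err == nil {
			fmt.Printf("== container %s ==\n%s", id, out)
		}
	}
}
```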
	I1213 11:21:56.297915  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:56.308442  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:21:56.308519  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:21:56.334553  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:21:56.334578  507417 cri.go:89] found id: ""
	I1213 11:21:56.334587  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:21:56.334644  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:21:56.339037  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:21:56.339111  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:21:56.364878  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:21:56.364898  507417 cri.go:89] found id: ""
	I1213 11:21:56.364907  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:21:56.364962  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:21:56.368566  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:21:56.368638  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:21:56.397602  507417 cri.go:89] found id: ""
	I1213 11:21:56.397677  507417 logs.go:282] 0 containers: []
	W1213 11:21:56.397700  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:21:56.397723  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:21:56.397808  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:21:56.422851  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:21:56.422876  507417 cri.go:89] found id: ""
	I1213 11:21:56.422885  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:21:56.422940  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:21:56.426743  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:21:56.426816  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:21:56.450911  507417 cri.go:89] found id: ""
	I1213 11:21:56.450939  507417 logs.go:282] 0 containers: []
	W1213 11:21:56.450948  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:21:56.450954  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:21:56.451010  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:21:56.475565  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:21:56.475588  507417 cri.go:89] found id: ""
	I1213 11:21:56.475597  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:21:56.475662  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:21:56.479513  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:21:56.479591  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:21:56.505325  507417 cri.go:89] found id: ""
	I1213 11:21:56.505357  507417 logs.go:282] 0 containers: []
	W1213 11:21:56.505366  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:21:56.505373  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:21:56.505437  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:21:56.530795  507417 cri.go:89] found id: ""
	I1213 11:21:56.530817  507417 logs.go:282] 0 containers: []
	W1213 11:21:56.530826  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:21:56.530841  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:21:56.530856  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:21:56.562885  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:21:56.562923  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:21:56.592508  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:21:56.592535  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:21:56.621886  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:21:56.621923  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:21:56.685173  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:21:56.685204  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:21:56.701805  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:21:56.701835  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:21:56.769319  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:21:56.769341  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:21:56.769361  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:21:56.815000  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:21:56.815029  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:21:56.846935  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:21:56.846963  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:21:59.396449  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:21:59.406431  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:21:59.406543  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:21:59.435143  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:21:59.435166  507417 cri.go:89] found id: ""
	I1213 11:21:59.435175  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:21:59.435243  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:21:59.439154  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:21:59.439230  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:21:59.464771  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:21:59.464834  507417 cri.go:89] found id: ""
	I1213 11:21:59.464859  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:21:59.464928  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:21:59.468782  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:21:59.468875  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:21:59.498489  507417 cri.go:89] found id: ""
	I1213 11:21:59.498515  507417 logs.go:282] 0 containers: []
	W1213 11:21:59.498524  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:21:59.498530  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:21:59.498586  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:21:59.524368  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:21:59.524389  507417 cri.go:89] found id: ""
	I1213 11:21:59.524397  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:21:59.524456  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:21:59.528270  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:21:59.528351  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:21:59.553449  507417 cri.go:89] found id: ""
	I1213 11:21:59.553474  507417 logs.go:282] 0 containers: []
	W1213 11:21:59.553483  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:21:59.553490  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:21:59.553548  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:21:59.581304  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:21:59.581331  507417 cri.go:89] found id: ""
	I1213 11:21:59.581340  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:21:59.581392  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:21:59.585135  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:21:59.585231  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:21:59.612950  507417 cri.go:89] found id: ""
	I1213 11:21:59.612977  507417 logs.go:282] 0 containers: []
	W1213 11:21:59.612986  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:21:59.612993  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:21:59.613049  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:21:59.638662  507417 cri.go:89] found id: ""
	I1213 11:21:59.638707  507417 logs.go:282] 0 containers: []
	W1213 11:21:59.638717  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:21:59.638731  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:21:59.638747  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:21:59.670594  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:21:59.670622  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:21:59.699581  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:21:59.699615  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:21:59.727992  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:21:59.728020  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:21:59.764005  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:21:59.764035  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:21:59.802336  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:21:59.802411  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:21:59.834394  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:21:59.834432  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:21:59.894337  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:21:59.894373  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:21:59.911096  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:21:59.911175  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:21:59.993179  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
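Between gathering passes the runner re-probes for a live apiserver process with `sudo pgrep -xnf kube-apiserver.*minikube.*` on a roughly three-second cadence (11:21:50, :53, :56, :59, 11:22:02, ...). A sketch of that wait loop, with the pattern and interval read off the log timestamps rather than taken from minikube's source:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning reproduces the probe from the log; pgrep exits
// non-zero when no process matches the pattern.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout, not minikube's
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
```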
	I1213 11:22:02.494306  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:22:02.505017  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:22:02.505090  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:22:02.533537  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:22:02.533566  507417 cri.go:89] found id: ""
	I1213 11:22:02.533577  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:22:02.533639  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:02.537667  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:22:02.537757  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:22:02.564165  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:02.564185  507417 cri.go:89] found id: ""
	I1213 11:22:02.564193  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:22:02.564249  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:02.568114  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:22:02.568189  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:22:02.593640  507417 cri.go:89] found id: ""
	I1213 11:22:02.593668  507417 logs.go:282] 0 containers: []
	W1213 11:22:02.593677  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:22:02.593683  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:22:02.593741  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:22:02.623400  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:22:02.623427  507417 cri.go:89] found id: ""
	I1213 11:22:02.623435  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:22:02.623492  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:02.627275  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:22:02.627346  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:22:02.652979  507417 cri.go:89] found id: ""
	I1213 11:22:02.653006  507417 logs.go:282] 0 containers: []
	W1213 11:22:02.653015  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:22:02.653022  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:22:02.653080  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:22:02.678766  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:22:02.678791  507417 cri.go:89] found id: ""
	I1213 11:22:02.678800  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:22:02.678860  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:02.682569  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:22:02.682640  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:22:02.707495  507417 cri.go:89] found id: ""
	I1213 11:22:02.707522  507417 logs.go:282] 0 containers: []
	W1213 11:22:02.707531  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:22:02.707538  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:22:02.707594  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:22:02.732455  507417 cri.go:89] found id: ""
	I1213 11:22:02.732480  507417 logs.go:282] 0 containers: []
	W1213 11:22:02.732489  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:22:02.732506  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:22:02.732517  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:22:02.792806  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:22:02.792842  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:22:02.809313  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:22:02.809350  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:22:02.841027  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:22:02.841061  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:22:02.872257  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:22:02.872289  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:22:02.901568  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:22:02.901607  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:22:02.934791  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:22:02.934874  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:22:03.007993  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:22:03.008064  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:22:03.008092  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:22:03.043828  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:22:03.043865  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:05.578112  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:22:05.589162  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:22:05.589236  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:22:05.616332  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:22:05.616360  507417 cri.go:89] found id: ""
	I1213 11:22:05.616369  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:22:05.616424  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:05.620277  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:22:05.620351  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:22:05.645772  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:05.645795  507417 cri.go:89] found id: ""
	I1213 11:22:05.645804  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:22:05.645864  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:05.649607  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:22:05.649676  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:22:05.674223  507417 cri.go:89] found id: ""
	I1213 11:22:05.674278  507417 logs.go:282] 0 containers: []
	W1213 11:22:05.674307  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:22:05.674324  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:22:05.674400  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:22:05.700446  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:22:05.700469  507417 cri.go:89] found id: ""
	I1213 11:22:05.700477  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:22:05.700555  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:05.704245  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:22:05.704331  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:22:05.729516  507417 cri.go:89] found id: ""
	I1213 11:22:05.729539  507417 logs.go:282] 0 containers: []
	W1213 11:22:05.729548  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:22:05.729555  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:22:05.729618  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:22:05.753561  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:22:05.753584  507417 cri.go:89] found id: ""
	I1213 11:22:05.753593  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:22:05.753647  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:05.757352  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:22:05.757424  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:22:05.789955  507417 cri.go:89] found id: ""
	I1213 11:22:05.789979  507417 logs.go:282] 0 containers: []
	W1213 11:22:05.789988  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:22:05.789994  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:22:05.790052  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:22:05.815505  507417 cri.go:89] found id: ""
	I1213 11:22:05.815535  507417 logs.go:282] 0 containers: []
	W1213 11:22:05.815544  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:22:05.815562  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:22:05.815578  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:22:05.831890  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:22:05.831921  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:22:05.901696  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:22:05.901717  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:22:05.901731  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:22:05.963709  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:22:05.963794  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:22:06.006904  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:22:06.006947  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:06.042705  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:22:06.042781  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:22:06.079354  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:22:06.079386  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:22:06.110554  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:22:06.110584  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:22:06.140822  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:22:06.140856  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:22:08.675621  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:22:08.686184  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:22:08.686258  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:22:08.711848  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:22:08.711920  507417 cri.go:89] found id: ""
	I1213 11:22:08.711943  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:22:08.712031  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:08.717221  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:22:08.717294  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:22:08.743609  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:08.743633  507417 cri.go:89] found id: ""
	I1213 11:22:08.743642  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:22:08.743703  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:08.747664  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:22:08.747737  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:22:08.773636  507417 cri.go:89] found id: ""
	I1213 11:22:08.773661  507417 logs.go:282] 0 containers: []
	W1213 11:22:08.773670  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:22:08.773677  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:22:08.773738  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:22:08.804306  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:22:08.804328  507417 cri.go:89] found id: ""
	I1213 11:22:08.804340  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:22:08.804397  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:08.808325  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:22:08.808415  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:22:08.833920  507417 cri.go:89] found id: ""
	I1213 11:22:08.833998  507417 logs.go:282] 0 containers: []
	W1213 11:22:08.834021  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:22:08.834041  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:22:08.834129  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:22:08.860373  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:22:08.860398  507417 cri.go:89] found id: ""
	I1213 11:22:08.860406  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:22:08.860462  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:08.864490  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:22:08.864566  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:22:08.890599  507417 cri.go:89] found id: ""
	I1213 11:22:08.890625  507417 logs.go:282] 0 containers: []
	W1213 11:22:08.890642  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:22:08.890649  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:22:08.890740  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:22:08.920778  507417 cri.go:89] found id: ""
	I1213 11:22:08.920805  507417 logs.go:282] 0 containers: []
	W1213 11:22:08.920815  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:22:08.920829  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:22:08.920848  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:22:08.938924  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:22:08.939010  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:22:09.008355  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:22:09.008431  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:22:09.008454  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:22:09.044852  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:22:09.044888  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:22:09.075074  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:22:09.075109  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:22:09.104606  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:22:09.104633  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:22:09.164510  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:22:09.164545  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:22:09.202055  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:22:09.202087  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:09.239374  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:22:09.239407  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:22:11.772008  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:22:11.785514  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:22:11.785691  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:22:11.833384  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:22:11.833481  507417 cri.go:89] found id: ""
	I1213 11:22:11.833513  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:22:11.833614  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:11.838625  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:22:11.838787  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:22:11.878177  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:11.878251  507417 cri.go:89] found id: ""
	I1213 11:22:11.878273  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:22:11.878361  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:11.882888  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:22:11.883042  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:22:11.926784  507417 cri.go:89] found id: ""
	I1213 11:22:11.926864  507417 logs.go:282] 0 containers: []
	W1213 11:22:11.926894  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:22:11.926929  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:22:11.927067  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:22:11.993262  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:22:11.993345  507417 cri.go:89] found id: ""
	I1213 11:22:11.993375  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:22:11.993482  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:11.998094  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:22:11.998321  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:22:12.061002  507417 cri.go:89] found id: ""
	I1213 11:22:12.061086  507417 logs.go:282] 0 containers: []
	W1213 11:22:12.061110  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:22:12.061129  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:22:12.061234  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:22:12.101335  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:22:12.101408  507417 cri.go:89] found id: ""
	I1213 11:22:12.101442  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:22:12.101552  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:12.106412  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:22:12.106534  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:22:12.145628  507417 cri.go:89] found id: ""
	I1213 11:22:12.145729  507417 logs.go:282] 0 containers: []
	W1213 11:22:12.145753  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:22:12.145799  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:22:12.145953  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:22:12.184457  507417 cri.go:89] found id: ""
	I1213 11:22:12.184542  507417 logs.go:282] 0 containers: []
	W1213 11:22:12.184567  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:22:12.184617  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:22:12.184658  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:22:12.203864  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:22:12.203951  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:22:12.287987  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:22:12.288051  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:22:12.288077  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:12.325878  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:22:12.325915  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:22:12.354838  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:22:12.354873  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:22:12.416575  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:22:12.416610  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:22:12.462454  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:22:12.462486  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:22:12.497229  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:22:12.497264  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:22:12.529285  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:22:12.529322  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
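The block above is one pass of minikube's control-plane diagnostic loop: for each expected component it asks the CRI runtime for matching containers, then gathers logs from whatever it finds. A minimal bash sketch of the lookup phase, runnable on the node — it simply mirrors the `crictl ps -a --quiet --name=<component>` calls logged above, with the component list taken from the log:

    # Query the CRI runtime for each control-plane component, any state (-a),
    # printing only container IDs (--quiet), as the log-gathering loop does.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -n "$ids" ]; then
        echo "$name: $ids"
      else
        echo "$name: no container found"   # matches the W-level lines above
      fi
    done

In this run only kube-apiserver, etcd, kube-scheduler and kube-controller-manager return IDs; coredns, kube-proxy, kindnet and storage-provisioner come back empty on every cycle.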
	I1213 11:22:15.060548  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:22:15.072564  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:22:15.072675  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:22:15.100724  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:22:15.100765  507417 cri.go:89] found id: ""
	I1213 11:22:15.100776  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:22:15.100880  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:15.105620  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:22:15.105754  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:22:15.133580  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:15.133603  507417 cri.go:89] found id: ""
	I1213 11:22:15.133612  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:22:15.133676  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:15.137949  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:22:15.138024  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:22:15.164391  507417 cri.go:89] found id: ""
	I1213 11:22:15.164418  507417 logs.go:282] 0 containers: []
	W1213 11:22:15.164428  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:22:15.164435  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:22:15.164500  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:22:15.195131  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:22:15.195165  507417 cri.go:89] found id: ""
	I1213 11:22:15.195174  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:22:15.195255  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:15.199419  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:22:15.199517  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:22:15.225413  507417 cri.go:89] found id: ""
	I1213 11:22:15.225449  507417 logs.go:282] 0 containers: []
	W1213 11:22:15.225457  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:22:15.225479  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:22:15.225563  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:22:15.253546  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:22:15.253578  507417 cri.go:89] found id: ""
	I1213 11:22:15.253590  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:22:15.253673  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:15.257602  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:22:15.257719  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:22:15.283211  507417 cri.go:89] found id: ""
	I1213 11:22:15.283278  507417 logs.go:282] 0 containers: []
	W1213 11:22:15.283294  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:22:15.283303  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:22:15.283360  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:22:15.313242  507417 cri.go:89] found id: ""
	I1213 11:22:15.313267  507417 logs.go:282] 0 containers: []
	W1213 11:22:15.313277  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:22:15.313291  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:22:15.313302  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:22:15.342556  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:22:15.342589  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:22:15.371584  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:22:15.371611  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:22:15.442642  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:22:15.442663  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:22:15.442678  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:22:15.477407  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:22:15.477442  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:22:15.539144  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:22:15.539186  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:22:15.556336  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:22:15.556367  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:15.590797  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:22:15.590828  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:22:15.629249  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:22:15.629281  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
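Every "describe nodes" attempt fails the same way: kubectl, pointed at the node's own kubeconfig, gets connection refused on localhost:8443, so the apiserver container exists but is not serving. The failing probe can be reproduced by hand with the paths from the log; the curl line is an assumed lighter-weight check of the same condition, not something minikube runs:

    # The exact probe from the log; exits 1 while the apiserver is down.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    # Assumed quicker check: default RBAC exposes /healthz anonymously.
    curl -sk https://localhost:8443/healthz || echo "apiserver not serving"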
	I1213 11:22:18.159970  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:22:18.170103  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:22:18.170175  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:22:18.195495  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:22:18.195518  507417 cri.go:89] found id: ""
	I1213 11:22:18.195527  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:22:18.195587  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:18.199441  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:22:18.199514  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:22:18.234619  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:18.234643  507417 cri.go:89] found id: ""
	I1213 11:22:18.234652  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:22:18.234730  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:18.238483  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:22:18.238559  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:22:18.265973  507417 cri.go:89] found id: ""
	I1213 11:22:18.266000  507417 logs.go:282] 0 containers: []
	W1213 11:22:18.266009  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:22:18.266018  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:22:18.266078  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:22:18.291993  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:22:18.292017  507417 cri.go:89] found id: ""
	I1213 11:22:18.292026  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:22:18.292083  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:18.296201  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:22:18.296299  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:22:18.325166  507417 cri.go:89] found id: ""
	I1213 11:22:18.325192  507417 logs.go:282] 0 containers: []
	W1213 11:22:18.325202  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:22:18.325208  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:22:18.325286  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:22:18.351186  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:22:18.351210  507417 cri.go:89] found id: ""
	I1213 11:22:18.351219  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:22:18.351313  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:18.355211  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:22:18.355310  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:22:18.381614  507417 cri.go:89] found id: ""
	I1213 11:22:18.381639  507417 logs.go:282] 0 containers: []
	W1213 11:22:18.381649  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:22:18.381656  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:22:18.381745  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:22:18.407021  507417 cri.go:89] found id: ""
	I1213 11:22:18.407045  507417 logs.go:282] 0 containers: []
	W1213 11:22:18.407054  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:22:18.407087  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:22:18.407102  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:22:18.465029  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:22:18.465064  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:18.499090  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:22:18.499131  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:22:18.532868  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:22:18.532896  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:22:18.562102  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:22:18.562133  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:22:18.579656  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:22:18.579683  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:22:18.649420  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:22:18.649440  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:22:18.649454  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:22:18.713996  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:22:18.714040  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:22:18.771235  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:22:18.771269  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
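The "container status" collector is a single shell fallback chain, worth unpacking since it closes most cycles. An annotated copy of the command exactly as logged:

    # `which crictl || echo crictl` substitutes crictl's full path when it is
    # installed, and the bare name otherwise (so the error stays readable);
    # if the CRI listing itself fails, fall back to Docker.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a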
	I1213 11:22:21.314407  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:22:21.327651  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:22:21.327721  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:22:21.360656  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:22:21.360683  507417 cri.go:89] found id: ""
	I1213 11:22:21.360692  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:22:21.360780  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:21.364756  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:22:21.364846  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:22:21.393318  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:21.393341  507417 cri.go:89] found id: ""
	I1213 11:22:21.393350  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:22:21.393406  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:21.397306  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:22:21.397383  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:22:21.423294  507417 cri.go:89] found id: ""
	I1213 11:22:21.423317  507417 logs.go:282] 0 containers: []
	W1213 11:22:21.423326  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:22:21.423332  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:22:21.423400  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:22:21.452813  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:22:21.452838  507417 cri.go:89] found id: ""
	I1213 11:22:21.452867  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:22:21.452925  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:21.456769  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:22:21.456841  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:22:21.481985  507417 cri.go:89] found id: ""
	I1213 11:22:21.482015  507417 logs.go:282] 0 containers: []
	W1213 11:22:21.482024  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:22:21.482031  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:22:21.482089  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:22:21.511343  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:22:21.511364  507417 cri.go:89] found id: ""
	I1213 11:22:21.511373  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:22:21.511463  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:21.515533  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:22:21.515621  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:22:21.540931  507417 cri.go:89] found id: ""
	I1213 11:22:21.540954  507417 logs.go:282] 0 containers: []
	W1213 11:22:21.540963  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:22:21.540970  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:22:21.541039  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:22:21.566712  507417 cri.go:89] found id: ""
	I1213 11:22:21.566791  507417 logs.go:282] 0 containers: []
	W1213 11:22:21.566814  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:22:21.566856  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:22:21.566883  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:22:21.583738  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:22:21.583773  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:22:21.647922  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:22:21.647942  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:22:21.647967  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:22:21.708723  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:22:21.708765  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:21.746277  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:22:21.746313  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:22:21.784507  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:22:21.784542  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:22:21.813609  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:22:21.813637  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:22:21.880590  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:22:21.880635  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:22:21.910960  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:22:21.910999  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
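Each cycle opens with a process-level check before any CRI queries. The log does not print its exit status, but the flags make the intent clear; annotated:

    # -f matches the pattern against the full command line, -x requires the
    # whole line to match the pattern exactly, -n keeps only the newest PID.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
      && echo "apiserver process running" || echo "no apiserver process"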
	I1213 11:22:24.443922  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:22:24.455106  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:22:24.455172  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:22:24.485508  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:22:24.485526  507417 cri.go:89] found id: ""
	I1213 11:22:24.485535  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:22:24.485589  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:24.489468  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:22:24.489543  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:22:24.515490  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:24.515513  507417 cri.go:89] found id: ""
	I1213 11:22:24.515521  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:22:24.515577  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:24.519374  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:22:24.519448  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:22:24.544530  507417 cri.go:89] found id: ""
	I1213 11:22:24.544554  507417 logs.go:282] 0 containers: []
	W1213 11:22:24.544563  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:22:24.544569  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:22:24.544630  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:22:24.571272  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:22:24.571293  507417 cri.go:89] found id: ""
	I1213 11:22:24.571301  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:22:24.571361  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:24.576428  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:22:24.576507  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:22:24.600847  507417 cri.go:89] found id: ""
	I1213 11:22:24.600877  507417 logs.go:282] 0 containers: []
	W1213 11:22:24.600886  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:22:24.600892  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:22:24.600994  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:22:24.626315  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:22:24.626338  507417 cri.go:89] found id: ""
	I1213 11:22:24.626347  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:22:24.626403  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:24.630109  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:22:24.630182  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:22:24.656700  507417 cri.go:89] found id: ""
	I1213 11:22:24.656727  507417 logs.go:282] 0 containers: []
	W1213 11:22:24.656737  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:22:24.656743  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:22:24.656801  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:22:24.688766  507417 cri.go:89] found id: ""
	I1213 11:22:24.688793  507417 logs.go:282] 0 containers: []
	W1213 11:22:24.688802  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:22:24.688816  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:22:24.688828  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:22:24.758887  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:22:24.758910  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:22:24.758924  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:22:24.801109  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:22:24.801143  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:22:24.831960  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:22:24.831994  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:22:24.861320  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:22:24.861350  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:22:24.894521  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:22:24.894549  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:22:24.953927  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:22:24.953963  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:22:24.970876  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:22:24.970906  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:25.003717  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:22:25.003757  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
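The kubelet, containerd and dmesg sections are bounded reads, which keeps each gather pass fast even on a wedged node. The same commands as logged, annotated:

    # Last 400 journal entries per unit.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    # Kernel ring buffer, warnings and worse only; -H human-readable output,
    # -P no pager, -L=never no color codes in the captured text.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400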
	I1213 11:22:27.536950  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:22:27.547504  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:22:27.547578  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:22:27.573859  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:22:27.573953  507417 cri.go:89] found id: ""
	I1213 11:22:27.573975  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:22:27.574056  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:27.577815  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:22:27.577906  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:22:27.601605  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:27.601628  507417 cri.go:89] found id: ""
	I1213 11:22:27.601636  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:22:27.601692  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:27.605415  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:22:27.605498  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:22:27.630477  507417 cri.go:89] found id: ""
	I1213 11:22:27.630510  507417 logs.go:282] 0 containers: []
	W1213 11:22:27.630520  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:22:27.630526  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:22:27.630589  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:22:27.655662  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:22:27.655684  507417 cri.go:89] found id: ""
	I1213 11:22:27.655693  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:22:27.655764  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:27.660190  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:22:27.660267  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:22:27.690721  507417 cri.go:89] found id: ""
	I1213 11:22:27.690748  507417 logs.go:282] 0 containers: []
	W1213 11:22:27.690773  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:22:27.690780  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:22:27.690854  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:22:27.723894  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:22:27.723925  507417 cri.go:89] found id: ""
	I1213 11:22:27.723934  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:22:27.723992  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:27.728012  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:22:27.728094  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:22:27.752059  507417 cri.go:89] found id: ""
	I1213 11:22:27.752084  507417 logs.go:282] 0 containers: []
	W1213 11:22:27.752094  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:22:27.752101  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:22:27.752160  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:22:27.778213  507417 cri.go:89] found id: ""
	I1213 11:22:27.778240  507417 logs.go:282] 0 containers: []
	W1213 11:22:27.778250  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:22:27.778266  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:22:27.778278  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:22:27.839947  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:22:27.839980  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:22:27.858211  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:22:27.858240  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:22:27.928014  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:22:27.928038  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:22:27.928050  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:22:27.965998  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:22:27.966029  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:27.998496  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:22:27.998538  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:22:28.034998  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:22:28.035029  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:22:28.066194  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:22:28.066228  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:22:28.098381  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:22:28.098409  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
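For every container ID the lookup phase returns, logs are pulled with a 400-line tail. A sketch chaining the two steps for one component (kube-apiserver here; any of the four found components works the same way):

    # Resolve the newest kube-apiserver container and tail its log,
    # as the gather passes above do per component.
    id=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n 1)
    [ -n "$id" ] && sudo /usr/local/bin/crictl logs --tail 400 "$id"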
	I1213 11:22:30.632246  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:22:30.642610  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:22:30.642678  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:22:30.672985  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:22:30.673004  507417 cri.go:89] found id: ""
	I1213 11:22:30.673012  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:22:30.673066  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:30.677477  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:22:30.677549  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:22:30.704324  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:30.704391  507417 cri.go:89] found id: ""
	I1213 11:22:30.704412  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:22:30.704499  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:30.708759  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:22:30.708837  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:22:30.735734  507417 cri.go:89] found id: ""
	I1213 11:22:30.735763  507417 logs.go:282] 0 containers: []
	W1213 11:22:30.735771  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:22:30.735778  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:22:30.735834  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:22:30.759868  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:22:30.759932  507417 cri.go:89] found id: ""
	I1213 11:22:30.759947  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:22:30.760014  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:30.763838  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:22:30.763909  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:22:30.795641  507417 cri.go:89] found id: ""
	I1213 11:22:30.795667  507417 logs.go:282] 0 containers: []
	W1213 11:22:30.795676  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:22:30.795682  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:22:30.795739  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:22:30.820992  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:22:30.821013  507417 cri.go:89] found id: ""
	I1213 11:22:30.821022  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:22:30.821097  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:30.824961  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:22:30.825033  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:22:30.854249  507417 cri.go:89] found id: ""
	I1213 11:22:30.854271  507417 logs.go:282] 0 containers: []
	W1213 11:22:30.854280  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:22:30.854286  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:22:30.854347  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:22:30.879695  507417 cri.go:89] found id: ""
	I1213 11:22:30.879720  507417 logs.go:282] 0 containers: []
	W1213 11:22:30.879729  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:22:30.879741  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:22:30.879758  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:22:30.908156  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:22:30.908185  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:22:30.924598  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:22:30.924625  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:22:30.987219  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:22:30.987240  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:22:30.987253  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:22:31.027622  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:22:31.027652  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:22:31.087133  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:22:31.087170  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:22:31.137838  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:22:31.137878  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:31.173366  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:22:31.173408  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:22:31.222526  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:22:31.222560  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:22:33.760422  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:22:33.771066  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:22:33.771141  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:22:33.798053  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:22:33.798079  507417 cri.go:89] found id: ""
	I1213 11:22:33.798088  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:22:33.798144  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:33.801905  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:22:33.801982  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:22:33.826599  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:33.826620  507417 cri.go:89] found id: ""
	I1213 11:22:33.826636  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:22:33.826743  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:33.830575  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:22:33.830649  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:22:33.859619  507417 cri.go:89] found id: ""
	I1213 11:22:33.859646  507417 logs.go:282] 0 containers: []
	W1213 11:22:33.859655  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:22:33.859662  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:22:33.859719  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:22:33.888656  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:22:33.888677  507417 cri.go:89] found id: ""
	I1213 11:22:33.888686  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:22:33.888741  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:33.892508  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:22:33.892594  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:22:33.916759  507417 cri.go:89] found id: ""
	I1213 11:22:33.916784  507417 logs.go:282] 0 containers: []
	W1213 11:22:33.916792  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:22:33.916799  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:22:33.916856  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:22:33.947476  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:22:33.947500  507417 cri.go:89] found id: ""
	I1213 11:22:33.947508  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:22:33.947564  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:33.951228  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:22:33.951297  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:22:33.980204  507417 cri.go:89] found id: ""
	I1213 11:22:33.980230  507417 logs.go:282] 0 containers: []
	W1213 11:22:33.980239  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:22:33.980246  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:22:33.980305  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:22:34.010384  507417 cri.go:89] found id: ""
	I1213 11:22:34.010410  507417 logs.go:282] 0 containers: []
	W1213 11:22:34.010419  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:22:34.010433  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:22:34.010449  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:22:34.028196  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:22:34.028227  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:22:34.101005  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:22:34.101078  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:22:34.101097  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:22:34.139643  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:22:34.139676  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:34.173016  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:22:34.173049  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:22:34.209300  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:22:34.209330  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:22:34.238525  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:22:34.238559  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:22:34.269757  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:22:34.269785  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:22:34.328794  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:22:34.328830  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
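The passes repeat on a roughly three-second cadence (11:22:12, :15, :18, :21, :24, :27, :30, :33) with identical results, consistent with an apiserver whose container starts but never becomes ready. A hypothetical wait loop with the same shape, purely illustrative — minikube implements this retry in Go, not shell:

    # Hypothetical readiness poll at the ~3s interval seen in the log.
    until curl -sk https://localhost:8443/healthz >/dev/null; do
      echo "apiserver not ready, retrying..."
      sleep 3
    done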
	I1213 11:22:36.862818  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:22:36.874877  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:22:36.874958  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:22:36.907282  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:22:36.907307  507417 cri.go:89] found id: ""
	I1213 11:22:36.907316  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:22:36.907373  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:36.914004  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:22:36.914075  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:22:36.951090  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:36.951114  507417 cri.go:89] found id: ""
	I1213 11:22:36.951123  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:22:36.951178  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:36.955330  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:22:36.955419  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:22:36.988043  507417 cri.go:89] found id: ""
	I1213 11:22:36.988069  507417 logs.go:282] 0 containers: []
	W1213 11:22:36.988078  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:22:36.988084  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:22:36.988140  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:22:37.052971  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:22:37.052993  507417 cri.go:89] found id: ""
	I1213 11:22:37.053002  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:22:37.053056  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:37.057308  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:22:37.057382  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:22:37.087833  507417 cri.go:89] found id: ""
	I1213 11:22:37.087859  507417 logs.go:282] 0 containers: []
	W1213 11:22:37.087868  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:22:37.087874  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:22:37.087931  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:22:37.122460  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:22:37.122485  507417 cri.go:89] found id: ""
	I1213 11:22:37.122493  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:22:37.122551  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:37.126953  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:22:37.127034  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:22:37.157388  507417 cri.go:89] found id: ""
	I1213 11:22:37.157414  507417 logs.go:282] 0 containers: []
	W1213 11:22:37.157422  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:22:37.157429  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:22:37.157484  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:22:37.189178  507417 cri.go:89] found id: ""
	I1213 11:22:37.189205  507417 logs.go:282] 0 containers: []
	W1213 11:22:37.189214  507417 logs.go:284] No container was found matching "storage-provisioner"
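	(The discovery pass above can be reproduced by hand. A minimal sketch, assuming crictl is on PATH and pointed at the same containerd runtime; the component names are exactly the ones probed in the log:

	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet storage-provisioner; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      if [ -z "$ids" ]; then
	        echo "no container found matching \"$name\""   # matches the W-level log lines
	      else
	        echo "$name: $ids"
	      fi
	    done

	Only kube-apiserver, etcd, kube-scheduler, and kube-controller-manager report container IDs here; coredns, kube-proxy, kindnet, and storage-provisioner never started, which is consistent with a control plane that came up but was never able to serve workloads.)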
	I1213 11:22:37.189230  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:22:37.189242  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:22:37.233826  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:22:37.233856  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:22:37.264227  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:22:37.264262  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:22:37.299268  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:22:37.299298  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:22:37.360576  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:22:37.360614  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:37.398163  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:22:37.398195  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:22:37.436840  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:22:37.436876  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:22:37.455118  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:22:37.455156  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:22:37.525335  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
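	(The "connection refused" above means kubectl found no listener on localhost:8443 at all, even though a kube-apiserver container ID was discovered. A quick hedged check; the crictl command is taken from the log, while the curl probe is an assumption about the apiserver's standard /livez health endpoint:

	    sudo crictl ps -a --name=kube-apiserver        # Running, or Exited with an ID still listed?
	    curl -sk https://localhost:8443/livez || echo "apiserver not reachable"

	An exited container whose ID is still listed would explain exactly this pattern: crictl ps -a reports it, but nothing answers on the port.)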
	I1213 11:22:37.525357  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:22:37.525371  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:22:40.062852  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:22:40.079113  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:22:40.079192  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:22:40.125154  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:22:40.125177  507417 cri.go:89] found id: ""
	I1213 11:22:40.125185  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:22:40.125245  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:40.135113  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:22:40.135188  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:22:40.181380  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:40.181401  507417 cri.go:89] found id: ""
	I1213 11:22:40.181410  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:22:40.181467  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:40.186435  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:22:40.186513  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:22:40.224576  507417 cri.go:89] found id: ""
	I1213 11:22:40.224598  507417 logs.go:282] 0 containers: []
	W1213 11:22:40.224606  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:22:40.224612  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:22:40.224677  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:22:40.267258  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:22:40.267279  507417 cri.go:89] found id: ""
	I1213 11:22:40.267287  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:22:40.267345  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:40.271456  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:22:40.271522  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:22:40.304188  507417 cri.go:89] found id: ""
	I1213 11:22:40.304211  507417 logs.go:282] 0 containers: []
	W1213 11:22:40.304220  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:22:40.304227  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:22:40.304285  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:22:40.345053  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:22:40.345071  507417 cri.go:89] found id: ""
	I1213 11:22:40.345080  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:22:40.345174  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:40.349124  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:22:40.349205  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:22:40.388743  507417 cri.go:89] found id: ""
	I1213 11:22:40.388773  507417 logs.go:282] 0 containers: []
	W1213 11:22:40.388783  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:22:40.388789  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:22:40.388846  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:22:40.433882  507417 cri.go:89] found id: ""
	I1213 11:22:40.433904  507417 logs.go:282] 0 containers: []
	W1213 11:22:40.433912  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:22:40.433926  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:22:40.433938  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:22:40.514414  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:22:40.514569  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:22:40.534913  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:22:40.534994  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:22:40.634766  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:22:40.634784  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:22:40.634797  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:40.679048  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:22:40.679123  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:22:40.729493  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:22:40.729527  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:22:40.766665  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:22:40.766739  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:22:40.813458  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:22:40.813490  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:22:40.874832  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:22:40.874869  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
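	(Each "Gathering logs for ..." step shells out to one of a fixed set of collectors. The commands below are copied verbatim from the cycle above; <container-id> is a placeholder for the IDs found during discovery:

	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u containerd -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /usr/local/bin/crictl logs --tail 400 <container-id>
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

	Only the last collector fails; the others appear to succeed (no warnings are logged for them), so SSH access and the container runtime itself look healthy.)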
	I1213 11:22:43.418464  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:22:43.436711  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:22:43.436842  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:22:43.508071  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:22:43.508150  507417 cri.go:89] found id: ""
	I1213 11:22:43.508173  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:22:43.508254  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:43.513014  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:22:43.513141  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:22:43.543744  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:43.543817  507417 cri.go:89] found id: ""
	I1213 11:22:43.543840  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:22:43.543924  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:43.550557  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:22:43.550858  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:22:43.588418  507417 cri.go:89] found id: ""
	I1213 11:22:43.588518  507417 logs.go:282] 0 containers: []
	W1213 11:22:43.588541  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:22:43.588562  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:22:43.588658  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:22:43.629568  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:22:43.629660  507417 cri.go:89] found id: ""
	I1213 11:22:43.629689  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:22:43.629795  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:43.635390  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:22:43.635518  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:22:43.671320  507417 cri.go:89] found id: ""
	I1213 11:22:43.671396  507417 logs.go:282] 0 containers: []
	W1213 11:22:43.671421  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:22:43.671442  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:22:43.671521  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:22:43.701999  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:22:43.702075  507417 cri.go:89] found id: ""
	I1213 11:22:43.702098  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:22:43.702186  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:43.706089  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:22:43.706206  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:22:43.753072  507417 cri.go:89] found id: ""
	I1213 11:22:43.753148  507417 logs.go:282] 0 containers: []
	W1213 11:22:43.753171  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:22:43.753194  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:22:43.753276  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:22:43.789331  507417 cri.go:89] found id: ""
	I1213 11:22:43.789425  507417 logs.go:282] 0 containers: []
	W1213 11:22:43.789449  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:22:43.789475  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:22:43.789520  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:22:43.820760  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:22:43.820789  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:22:43.926007  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:22:43.926031  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:22:43.926048  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:22:43.993572  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:22:43.993642  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:22:44.047905  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:22:44.048076  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:22:44.089672  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:22:44.089700  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:44.134520  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:22:44.134598  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:22:44.191986  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:22:44.192062  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:22:44.249792  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:22:44.249901  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
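	(The timestamps show the whole diagnostic cycle repeating roughly every three seconds. A hedged sketch of the polling shape, read off the log's timing rather than taken from minikube's source:

	    while true; do
	      sudo pgrep -xnf 'kube-apiserver.*minikube.*'   # the probe each cycle starts with
	      # ... re-run the crictl discovery and the log collectors shown above ...
	      sleep 3                                        # interval inferred from the timestamps
	    done

	The loop never converges in this run; every iteration ends with the same "connection to the server localhost:8443 was refused" failure.)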
	I1213 11:22:46.815915  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:22:46.827996  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:22:46.828092  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:22:46.854279  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:22:46.854307  507417 cri.go:89] found id: ""
	I1213 11:22:46.854316  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:22:46.854404  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:46.858479  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:22:46.858573  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:22:46.883887  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:46.883910  507417 cri.go:89] found id: ""
	I1213 11:22:46.883918  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:22:46.883971  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:46.887864  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:22:46.887940  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:22:46.921211  507417 cri.go:89] found id: ""
	I1213 11:22:46.921237  507417 logs.go:282] 0 containers: []
	W1213 11:22:46.921245  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:22:46.921252  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:22:46.921311  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:22:46.946241  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:22:46.946260  507417 cri.go:89] found id: ""
	I1213 11:22:46.946269  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:22:46.946331  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:46.950102  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:22:46.950184  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:22:46.982480  507417 cri.go:89] found id: ""
	I1213 11:22:46.982508  507417 logs.go:282] 0 containers: []
	W1213 11:22:46.982517  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:22:46.982524  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:22:46.982592  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:22:47.011847  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:22:47.011868  507417 cri.go:89] found id: ""
	I1213 11:22:47.011877  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:22:47.011934  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:47.018029  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:22:47.018104  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:22:47.047413  507417 cri.go:89] found id: ""
	I1213 11:22:47.047439  507417 logs.go:282] 0 containers: []
	W1213 11:22:47.047448  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:22:47.047468  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:22:47.047545  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:22:47.083813  507417 cri.go:89] found id: ""
	I1213 11:22:47.083831  507417 logs.go:282] 0 containers: []
	W1213 11:22:47.083839  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:22:47.083854  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:22:47.083871  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:22:47.147652  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:22:47.147704  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:22:47.168009  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:22:47.168206  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:22:47.270350  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:22:47.273579  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:22:47.374231  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:22:47.374263  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:22:47.374278  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:47.419120  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:22:47.419196  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:22:47.472155  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:22:47.472219  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:22:47.507773  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:22:47.507883  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:22:47.540542  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:22:47.540576  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
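	(The "container status" collector is written defensively: it resolves crictl if possible and falls back to docker. Broken out of the one-liner above for readability:

	    crictl_bin=$(which crictl || echo crictl)        # prefer the resolved path, else bare name
	    sudo "$crictl_bin" ps -a || sudo docker ps -a    # docker ps is the fallback runtime query

	Nothing in this run appears to take the docker branch, since which crictl resolves to /usr/local/bin/crictl throughout the log.)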
	I1213 11:22:50.098623  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:22:50.109197  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:22:50.109271  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:22:50.135539  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:22:50.135559  507417 cri.go:89] found id: ""
	I1213 11:22:50.135567  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:22:50.135631  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:50.139525  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:22:50.139593  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:22:50.173271  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:50.173291  507417 cri.go:89] found id: ""
	I1213 11:22:50.173300  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:22:50.173356  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:50.178574  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:22:50.178644  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:22:50.214529  507417 cri.go:89] found id: ""
	I1213 11:22:50.214551  507417 logs.go:282] 0 containers: []
	W1213 11:22:50.214560  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:22:50.214566  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:22:50.214628  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:22:50.244287  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:22:50.244314  507417 cri.go:89] found id: ""
	I1213 11:22:50.244322  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:22:50.244396  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:50.248442  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:22:50.248516  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:22:50.274797  507417 cri.go:89] found id: ""
	I1213 11:22:50.274824  507417 logs.go:282] 0 containers: []
	W1213 11:22:50.274834  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:22:50.274840  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:22:50.274899  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:22:50.305282  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:22:50.305307  507417 cri.go:89] found id: ""
	I1213 11:22:50.305316  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:22:50.305373  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:50.309560  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:22:50.309632  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:22:50.334559  507417 cri.go:89] found id: ""
	I1213 11:22:50.334584  507417 logs.go:282] 0 containers: []
	W1213 11:22:50.334593  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:22:50.334600  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:22:50.334663  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:22:50.360456  507417 cri.go:89] found id: ""
	I1213 11:22:50.360479  507417 logs.go:282] 0 containers: []
	W1213 11:22:50.360488  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:22:50.360504  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:22:50.360515  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:22:50.419053  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:22:50.419089  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:22:50.435882  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:22:50.435913  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:22:50.500253  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:22:50.500275  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:22:50.500292  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:22:50.534837  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:22:50.534868  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:50.574185  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:22:50.574217  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:22:50.608193  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:22:50.608231  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:22:50.646820  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:22:50.646848  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:22:50.695369  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:22:50.695448  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:22:53.238173  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:22:53.248440  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:22:53.248512  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:22:53.275479  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:22:53.275505  507417 cri.go:89] found id: ""
	I1213 11:22:53.275514  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:22:53.275570  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:53.279507  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:22:53.279578  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:22:53.304455  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:53.304477  507417 cri.go:89] found id: ""
	I1213 11:22:53.304486  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:22:53.304543  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:53.308619  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:22:53.308690  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:22:53.333891  507417 cri.go:89] found id: ""
	I1213 11:22:53.333915  507417 logs.go:282] 0 containers: []
	W1213 11:22:53.333923  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:22:53.333930  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:22:53.333988  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:22:53.367735  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:22:53.367758  507417 cri.go:89] found id: ""
	I1213 11:22:53.367767  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:22:53.367841  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:53.371941  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:22:53.372033  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:22:53.400386  507417 cri.go:89] found id: ""
	I1213 11:22:53.400415  507417 logs.go:282] 0 containers: []
	W1213 11:22:53.400431  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:22:53.400438  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:22:53.400499  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:22:53.425054  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:22:53.425083  507417 cri.go:89] found id: ""
	I1213 11:22:53.425092  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:22:53.425149  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:53.429063  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:22:53.429134  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:22:53.454617  507417 cri.go:89] found id: ""
	I1213 11:22:53.454639  507417 logs.go:282] 0 containers: []
	W1213 11:22:53.454647  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:22:53.454653  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:22:53.454793  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:22:53.480876  507417 cri.go:89] found id: ""
	I1213 11:22:53.480901  507417 logs.go:282] 0 containers: []
	W1213 11:22:53.480911  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:22:53.480926  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:22:53.480938  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:22:53.545402  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:22:53.545447  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:22:53.562279  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:22:53.562315  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:22:53.633094  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:22:53.633113  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:22:53.633126  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:22:53.679141  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:22:53.679177  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:53.735989  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:22:53.736028  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:22:53.780767  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:22:53.780799  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:22:53.820457  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:22:53.820499  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:22:53.857420  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:22:53.857456  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:22:56.398830  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:22:56.409280  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:22:56.409350  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:22:56.435427  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:22:56.435446  507417 cri.go:89] found id: ""
	I1213 11:22:56.435458  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:22:56.435513  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:56.439303  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:22:56.439374  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:22:56.469225  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:56.469250  507417 cri.go:89] found id: ""
	I1213 11:22:56.469259  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:22:56.469319  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:56.473382  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:22:56.473454  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:22:56.499793  507417 cri.go:89] found id: ""
	I1213 11:22:56.499816  507417 logs.go:282] 0 containers: []
	W1213 11:22:56.499825  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:22:56.499832  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:22:56.499894  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:22:56.526151  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:22:56.526175  507417 cri.go:89] found id: ""
	I1213 11:22:56.526184  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:22:56.526243  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:56.530003  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:22:56.530086  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:22:56.555428  507417 cri.go:89] found id: ""
	I1213 11:22:56.555453  507417 logs.go:282] 0 containers: []
	W1213 11:22:56.555462  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:22:56.555469  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:22:56.555530  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:22:56.582151  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:22:56.582175  507417 cri.go:89] found id: ""
	I1213 11:22:56.582184  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:22:56.582239  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:56.586003  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:22:56.586078  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:22:56.611205  507417 cri.go:89] found id: ""
	I1213 11:22:56.611232  507417 logs.go:282] 0 containers: []
	W1213 11:22:56.611240  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:22:56.611247  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:22:56.611303  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:22:56.636393  507417 cri.go:89] found id: ""
	I1213 11:22:56.636419  507417 logs.go:282] 0 containers: []
	W1213 11:22:56.636428  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:22:56.636443  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:22:56.636456  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:56.674032  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:22:56.674111  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:22:56.712117  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:22:56.712148  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:22:56.741541  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:22:56.741581  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:22:56.769633  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:22:56.769661  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:22:56.830454  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:22:56.830491  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:22:56.847316  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:22:56.847347  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:22:56.915930  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:22:56.915948  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:22:56.915975  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:22:56.949278  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:22:56.949309  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:22:59.489323  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:22:59.500847  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:22:59.500913  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:22:59.533567  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:22:59.533587  507417 cri.go:89] found id: ""
	I1213 11:22:59.533596  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:22:59.533649  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:59.538034  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:22:59.538103  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:22:59.565658  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:59.565677  507417 cri.go:89] found id: ""
	I1213 11:22:59.565688  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:22:59.565742  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:59.570060  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:22:59.570127  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:22:59.616581  507417 cri.go:89] found id: ""
	I1213 11:22:59.616656  507417 logs.go:282] 0 containers: []
	W1213 11:22:59.616691  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:22:59.616713  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:22:59.616807  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:22:59.641742  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:22:59.641763  507417 cri.go:89] found id: ""
	I1213 11:22:59.641771  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:22:59.641831  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:59.645430  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:22:59.645498  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:22:59.674562  507417 cri.go:89] found id: ""
	I1213 11:22:59.674589  507417 logs.go:282] 0 containers: []
	W1213 11:22:59.674598  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:22:59.674605  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:22:59.674676  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:22:59.699233  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:22:59.699252  507417 cri.go:89] found id: ""
	I1213 11:22:59.699261  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:22:59.699318  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:22:59.703205  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:22:59.703276  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:22:59.734553  507417 cri.go:89] found id: ""
	I1213 11:22:59.734579  507417 logs.go:282] 0 containers: []
	W1213 11:22:59.734588  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:22:59.734594  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:22:59.734652  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:22:59.759652  507417 cri.go:89] found id: ""
	I1213 11:22:59.759677  507417 logs.go:282] 0 containers: []
	W1213 11:22:59.759686  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:22:59.759703  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:22:59.759714  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:22:59.820978  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:22:59.821038  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:22:59.838585  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:22:59.838713  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:22:59.873072  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:22:59.873104  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:22:59.905490  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:22:59.905524  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:22:59.939187  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:22:59.939264  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:22:59.970538  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:22:59.970616  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:23:00.067435  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:23:00.067458  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:23:00.067474  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:00.211134  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:23:00.211185  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
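	(By this point the same four container IDs have been stable for well over twenty seconds while localhost:8443 still refuses connections. A hedged way to distinguish "running but not serving" from "crash-looping"; the STATE column and the inspect JSON are standard crictl output, not something shown in this log:

	    sudo crictl ps -a --name=kube-apiserver         # STATE column: Running vs Exited
	    sudo crictl inspect d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee | grep -i -m1 state

	A CONTAINER_EXITED state here would point at the apiserver process itself; a Running state with a refused port would point instead at its listener or at a proxy in front of it.)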
	I1213 11:23:02.774622  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:23:02.786866  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:23:02.786945  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:23:02.811931  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:02.812008  507417 cri.go:89] found id: ""
	I1213 11:23:02.812032  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:23:02.812122  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:02.816021  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:23:02.816104  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:23:02.842241  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:02.842306  507417 cri.go:89] found id: ""
	I1213 11:23:02.842328  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:23:02.842414  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:02.846218  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:23:02.846333  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:23:02.872025  507417 cri.go:89] found id: ""
	I1213 11:23:02.872053  507417 logs.go:282] 0 containers: []
	W1213 11:23:02.872062  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:23:02.872069  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:23:02.872129  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:23:02.896663  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:23:02.896688  507417 cri.go:89] found id: ""
	I1213 11:23:02.896698  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:23:02.896753  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:02.900612  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:23:02.900705  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:23:02.936098  507417 cri.go:89] found id: ""
	I1213 11:23:02.936174  507417 logs.go:282] 0 containers: []
	W1213 11:23:02.936197  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:23:02.936218  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:23:02.936329  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:23:02.964637  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:23:02.964714  507417 cri.go:89] found id: ""
	I1213 11:23:02.964736  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:23:02.964826  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:02.969307  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:23:02.969434  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:23:02.995436  507417 cri.go:89] found id: ""
	I1213 11:23:02.995473  507417 logs.go:282] 0 containers: []
	W1213 11:23:02.995489  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:23:02.995495  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:23:02.995557  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:23:03.027999  507417 cri.go:89] found id: ""
	I1213 11:23:03.028025  507417 logs.go:282] 0 containers: []
	W1213 11:23:03.028034  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:23:03.028047  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:23:03.028061  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:23:03.045299  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:23:03.045382  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:23:03.120910  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:23:03.120932  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:23:03.120946  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:03.155670  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:23:03.155702  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:23:03.188529  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:23:03.188565  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:23:03.218186  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:23:03.218225  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:23:03.280185  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:23:03.280222  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:03.312800  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:23:03.312837  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:23:03.343034  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:23:03.343066  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:23:05.873043  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:23:05.883079  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:23:05.883156  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:23:05.908490  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:05.908513  507417 cri.go:89] found id: ""
	I1213 11:23:05.908521  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:23:05.908574  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:05.915236  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:23:05.915304  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:23:05.947929  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:05.947948  507417 cri.go:89] found id: ""
	I1213 11:23:05.947956  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:23:05.948008  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:05.952251  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:23:05.952320  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:23:05.979889  507417 cri.go:89] found id: ""
	I1213 11:23:05.979916  507417 logs.go:282] 0 containers: []
	W1213 11:23:05.979925  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:23:05.979932  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:23:05.979989  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:23:06.007393  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:23:06.007414  507417 cri.go:89] found id: ""
	I1213 11:23:06.007424  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:23:06.007496  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:06.012392  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:23:06.012531  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:23:06.041208  507417 cri.go:89] found id: ""
	I1213 11:23:06.041232  507417 logs.go:282] 0 containers: []
	W1213 11:23:06.041246  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:23:06.041252  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:23:06.041313  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:23:06.067781  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:23:06.067805  507417 cri.go:89] found id: ""
	I1213 11:23:06.067814  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:23:06.067870  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:06.071698  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:23:06.071791  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:23:06.101542  507417 cri.go:89] found id: ""
	I1213 11:23:06.101569  507417 logs.go:282] 0 containers: []
	W1213 11:23:06.101577  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:23:06.101584  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:23:06.101642  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:23:06.127625  507417 cri.go:89] found id: ""
	I1213 11:23:06.127652  507417 logs.go:282] 0 containers: []
	W1213 11:23:06.127661  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:23:06.127697  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:23:06.127718  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:06.161722  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:23:06.161756  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:06.198751  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:23:06.198787  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:23:06.232087  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:23:06.232121  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:23:06.266579  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:23:06.266661  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:23:06.299392  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:23:06.299437  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:23:06.329128  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:23:06.329162  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:23:06.387258  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:23:06.387291  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:23:06.404253  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:23:06.404283  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:23:06.474289  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
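
[editor's note] Every "describe nodes" attempt in this section fails the same way: kubectl, run with the in-node kubeconfig, gets connection refused on localhost:8443, so that part of the log bundle stays empty and minikube keeps polling. "Refused" (as opposed to a timeout) means nothing is accepting connections on that port at all. A minimal, hypothetical Go probe that distinguishes the two cases:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same endpoint the in-node kubeconfig points kubectl at.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// "connection refused" => nothing listening on the port;
			// a timeout would instead suggest the host or route is unreachable.
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}
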
	I1213 11:23:08.974804  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:23:08.992850  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:23:08.992922  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:23:09.027394  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:09.027415  507417 cri.go:89] found id: ""
	I1213 11:23:09.027423  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:23:09.027489  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:09.032059  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:23:09.032129  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:23:09.060053  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:09.060130  507417 cri.go:89] found id: ""
	I1213 11:23:09.060152  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:23:09.060237  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:09.064581  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:23:09.064698  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:23:09.098825  507417 cri.go:89] found id: ""
	I1213 11:23:09.098909  507417 logs.go:282] 0 containers: []
	W1213 11:23:09.098930  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:23:09.098947  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:23:09.099054  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:23:09.128122  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:23:09.128196  507417 cri.go:89] found id: ""
	I1213 11:23:09.128238  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:23:09.128327  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:09.132626  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:23:09.132747  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:23:09.171178  507417 cri.go:89] found id: ""
	I1213 11:23:09.171205  507417 logs.go:282] 0 containers: []
	W1213 11:23:09.171214  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:23:09.171221  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:23:09.171287  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:23:09.206655  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:23:09.206679  507417 cri.go:89] found id: ""
	I1213 11:23:09.206717  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:23:09.206777  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:09.210993  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:23:09.211078  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:23:09.236788  507417 cri.go:89] found id: ""
	I1213 11:23:09.236815  507417 logs.go:282] 0 containers: []
	W1213 11:23:09.236824  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:23:09.236830  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:23:09.236894  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:23:09.267458  507417 cri.go:89] found id: ""
	I1213 11:23:09.267496  507417 logs.go:282] 0 containers: []
	W1213 11:23:09.267505  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:23:09.267521  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:23:09.267533  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:23:09.332731  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:23:09.332768  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:23:09.349919  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:23:09.349955  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:23:09.433649  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:23:09.433670  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:23:09.433683  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:09.468947  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:23:09.468982  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:23:09.500347  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:23:09.500378  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:09.534534  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:23:09.534565  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:23:09.567488  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:23:09.567521  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:23:09.597769  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:23:09.597802  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:23:12.140112  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:23:12.150123  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:23:12.150190  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:23:12.177458  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:12.177486  507417 cri.go:89] found id: ""
	I1213 11:23:12.177501  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:23:12.177573  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:12.182416  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:23:12.182490  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:23:12.207125  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:12.207145  507417 cri.go:89] found id: ""
	I1213 11:23:12.207153  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:23:12.207212  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:12.210874  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:23:12.210946  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:23:12.234294  507417 cri.go:89] found id: ""
	I1213 11:23:12.234321  507417 logs.go:282] 0 containers: []
	W1213 11:23:12.234329  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:23:12.234336  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:23:12.234393  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:23:12.258986  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:23:12.259007  507417 cri.go:89] found id: ""
	I1213 11:23:12.259016  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:23:12.259070  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:12.262666  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:23:12.262772  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:23:12.286947  507417 cri.go:89] found id: ""
	I1213 11:23:12.286973  507417 logs.go:282] 0 containers: []
	W1213 11:23:12.286990  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:23:12.286997  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:23:12.287055  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:23:12.313408  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:23:12.313434  507417 cri.go:89] found id: ""
	I1213 11:23:12.313443  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:23:12.313498  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:12.317409  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:23:12.317486  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:23:12.344721  507417 cri.go:89] found id: ""
	I1213 11:23:12.344748  507417 logs.go:282] 0 containers: []
	W1213 11:23:12.344758  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:23:12.344763  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:23:12.344821  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:23:12.385647  507417 cri.go:89] found id: ""
	I1213 11:23:12.385675  507417 logs.go:282] 0 containers: []
	W1213 11:23:12.385684  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:23:12.385697  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:23:12.385708  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:23:12.444566  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:23:12.444602  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:12.479161  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:23:12.479198  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:23:12.517875  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:23:12.517907  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:23:12.546730  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:23:12.546761  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:23:12.563615  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:23:12.563648  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:23:12.632421  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:23:12.632488  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:23:12.632511  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:12.665641  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:23:12.665718  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:23:12.698660  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:23:12.698720  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
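
[editor's note] Component discovery in each cycle is "sudo crictl ps -a --quiet --name=<component>": with --quiet, crictl prints only the matching container IDs, one per line, and empty output is exactly what produces the `No container was found matching "..."` warnings seen above for coredns, kube-proxy, kindnet, and storage-provisioner. A hedged sketch of that parsing (listIDs is a hypothetical helper, not minikube's code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listIDs returns the container IDs crictl reports for a given name filter.
	func listIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-proxy"} {
			ids, err := listIDs(name)
			if err != nil || len(ids) == 0 {
				fmt.Printf("W no container was found matching %q (err=%v)\n", name, err)
				continue
			}
			fmt.Printf("I %d containers: %v\n", len(ids), ids)
		}
	}
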
	I1213 11:23:15.249292  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:23:15.259670  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:23:15.259739  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:23:15.285869  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:15.285894  507417 cri.go:89] found id: ""
	I1213 11:23:15.285903  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:23:15.285962  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:15.289685  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:23:15.289756  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:23:15.314030  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:15.314053  507417 cri.go:89] found id: ""
	I1213 11:23:15.314064  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:23:15.314122  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:15.317869  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:23:15.317943  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:23:15.343493  507417 cri.go:89] found id: ""
	I1213 11:23:15.343562  507417 logs.go:282] 0 containers: []
	W1213 11:23:15.343577  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:23:15.343583  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:23:15.343680  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:23:15.369401  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:23:15.369424  507417 cri.go:89] found id: ""
	I1213 11:23:15.369432  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:23:15.369486  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:15.373269  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:23:15.373349  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:23:15.398528  507417 cri.go:89] found id: ""
	I1213 11:23:15.398551  507417 logs.go:282] 0 containers: []
	W1213 11:23:15.398559  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:23:15.398565  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:23:15.398622  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:23:15.434725  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:23:15.434790  507417 cri.go:89] found id: ""
	I1213 11:23:15.434814  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:23:15.434909  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:15.439585  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:23:15.439671  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:23:15.480418  507417 cri.go:89] found id: ""
	I1213 11:23:15.480452  507417 logs.go:282] 0 containers: []
	W1213 11:23:15.480461  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:23:15.480468  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:23:15.480536  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:23:15.517318  507417 cri.go:89] found id: ""
	I1213 11:23:15.517341  507417 logs.go:282] 0 containers: []
	W1213 11:23:15.517350  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:23:15.517364  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:23:15.517377  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:23:15.561826  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:23:15.561863  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:23:15.600131  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:23:15.600161  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:23:15.632185  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:23:15.632218  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:23:15.692765  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:23:15.692844  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:23:15.820494  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:23:15.820518  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:23:15.820535  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:23:15.887832  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:23:15.887871  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:23:15.906103  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:23:15.906132  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:15.955105  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:23:15.955142  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:18.503219  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:23:18.514614  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:23:18.514710  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:23:18.543487  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:18.543509  507417 cri.go:89] found id: ""
	I1213 11:23:18.543517  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:23:18.543574  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:18.547594  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:23:18.547665  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:23:18.572762  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:18.572784  507417 cri.go:89] found id: ""
	I1213 11:23:18.572792  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:23:18.572849  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:18.576499  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:23:18.576576  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:23:18.601442  507417 cri.go:89] found id: ""
	I1213 11:23:18.601468  507417 logs.go:282] 0 containers: []
	W1213 11:23:18.601476  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:23:18.601484  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:23:18.601544  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:23:18.629331  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:23:18.629352  507417 cri.go:89] found id: ""
	I1213 11:23:18.629361  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:23:18.629416  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:18.633170  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:23:18.633271  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:23:18.658292  507417 cri.go:89] found id: ""
	I1213 11:23:18.658316  507417 logs.go:282] 0 containers: []
	W1213 11:23:18.658324  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:23:18.658331  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:23:18.658388  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:23:18.691107  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:23:18.691186  507417 cri.go:89] found id: ""
	I1213 11:23:18.691207  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:23:18.691296  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:18.695550  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:23:18.695619  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:23:18.721319  507417 cri.go:89] found id: ""
	I1213 11:23:18.721343  507417 logs.go:282] 0 containers: []
	W1213 11:23:18.721351  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:23:18.721358  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:23:18.721417  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:23:18.749788  507417 cri.go:89] found id: ""
	I1213 11:23:18.749811  507417 logs.go:282] 0 containers: []
	W1213 11:23:18.749820  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:23:18.749851  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:23:18.749865  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:23:18.766657  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:23:18.766755  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:18.801667  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:23:18.801741  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:18.833843  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:23:18.833874  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:23:18.864748  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:23:18.864783  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:23:18.894628  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:23:18.894658  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:23:18.954641  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:23:18.954675  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:23:19.022754  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:23:19.022789  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:23:19.022803  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:23:19.062939  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:23:19.063019  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
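
[editor's note] From here on the section is the same diagnostic pass repeating roughly every three seconds (11:23:02, :05, :08, :12, :15, :18, :21, :24, :27): probe for a kube-apiserver process with pgrep, re-enumerate CRI containers, dump logs, retry until the start deadline expires. A schematic Go version of such a wait loop — the interval and deadline below are assumptions, not minikube's exact values:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(6 * time.Minute) // assumed overall wait budget
		for time.Now().Before(deadline) {
			// Matches the probe in the log: is a kube-apiserver process running?
			err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
			if err == nil {
				fmt.Println("apiserver process found; proceed to health checks")
				return
			}
			// Not up yet: (re)gather component logs here, then retry.
			time.Sleep(3 * time.Second)
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}
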
	I1213 11:23:21.595070  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:23:21.606508  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:23:21.606601  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:23:21.635130  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:21.635152  507417 cri.go:89] found id: ""
	I1213 11:23:21.635160  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:23:21.635218  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:21.639037  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:23:21.639109  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:23:21.670466  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:21.670492  507417 cri.go:89] found id: ""
	I1213 11:23:21.670502  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:23:21.670559  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:21.675194  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:23:21.675270  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:23:21.708905  507417 cri.go:89] found id: ""
	I1213 11:23:21.708928  507417 logs.go:282] 0 containers: []
	W1213 11:23:21.708937  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:23:21.708943  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:23:21.709004  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:23:21.743950  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:23:21.743979  507417 cri.go:89] found id: ""
	I1213 11:23:21.743988  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:23:21.744088  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:21.747929  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:23:21.748000  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:23:21.773860  507417 cri.go:89] found id: ""
	I1213 11:23:21.773927  507417 logs.go:282] 0 containers: []
	W1213 11:23:21.773950  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:23:21.773967  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:23:21.774049  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:23:21.798627  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:23:21.798728  507417 cri.go:89] found id: ""
	I1213 11:23:21.798753  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:23:21.798833  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:21.802454  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:23:21.802520  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:23:21.831124  507417 cri.go:89] found id: ""
	I1213 11:23:21.831150  507417 logs.go:282] 0 containers: []
	W1213 11:23:21.831159  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:23:21.831166  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:23:21.831238  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:23:21.855963  507417 cri.go:89] found id: ""
	I1213 11:23:21.855990  507417 logs.go:282] 0 containers: []
	W1213 11:23:21.855999  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:23:21.856015  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:23:21.856031  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:21.897624  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:23:21.897657  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:23:21.935464  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:23:21.935497  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:23:21.967406  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:23:21.967436  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:23:22.030921  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:23:22.030958  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:23:22.048593  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:23:22.048625  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:23:22.116032  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:23:22.116056  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:23:22.116070  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:23:22.147176  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:23:22.147209  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:23:22.180132  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:23:22.180162  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:24.714855  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:23:24.725603  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:23:24.725675  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:23:24.755523  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:24.755548  507417 cri.go:89] found id: ""
	I1213 11:23:24.755558  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:23:24.755623  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:24.759810  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:23:24.759885  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:23:24.786394  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:24.786413  507417 cri.go:89] found id: ""
	I1213 11:23:24.786422  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:23:24.786477  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:24.790535  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:23:24.790608  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:23:24.814972  507417 cri.go:89] found id: ""
	I1213 11:23:24.814994  507417 logs.go:282] 0 containers: []
	W1213 11:23:24.815003  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:23:24.815009  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:23:24.815066  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:23:24.841268  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:23:24.841287  507417 cri.go:89] found id: ""
	I1213 11:23:24.841296  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:23:24.841361  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:24.845145  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:23:24.845215  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:23:24.873879  507417 cri.go:89] found id: ""
	I1213 11:23:24.873902  507417 logs.go:282] 0 containers: []
	W1213 11:23:24.873911  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:23:24.873917  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:23:24.873982  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:23:24.899961  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:23:24.899981  507417 cri.go:89] found id: ""
	I1213 11:23:24.899989  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:23:24.900046  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:24.903987  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:23:24.904090  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:23:24.932208  507417 cri.go:89] found id: ""
	I1213 11:23:24.932239  507417 logs.go:282] 0 containers: []
	W1213 11:23:24.932248  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:23:24.932256  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:23:24.932320  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:23:24.962011  507417 cri.go:89] found id: ""
	I1213 11:23:24.962084  507417 logs.go:282] 0 containers: []
	W1213 11:23:24.962106  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:23:24.962134  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:23:24.962172  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:23:25.023277  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:23:25.023318  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:23:25.040342  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:23:25.040369  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:23:25.106146  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:23:25.106171  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:23:25.106185  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:25.139962  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:23:25.139993  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:23:25.172092  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:23:25.172127  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:25.207025  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:23:25.207056  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:23:25.236366  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:23:25.236401  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:23:25.268976  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:23:25.269067  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
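
	Each retry cycle above begins by enumerating the control-plane containers one component at a time with "sudo crictl ps -a --quiet --name=<component>"; an empty result is what produces the "0 containers" and the No container was found matching warnings. A minimal Go sketch of that enumeration step, assuming crictl is on PATH and sudo is passwordless (listContainersByName is an illustrative helper, not minikube's actual cri.go API):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainersByName runs "sudo crictl ps -a --quiet --name=<component>",
	// as the cri.go lines above do, and returns the matching container IDs;
	// an empty slice corresponds to the "0 containers" log lines.
	func listContainersByName(component string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return nil, fmt.Errorf("crictl ps for %q: %w", component, err)
		}
		return strings.Fields(strings.TrimSpace(string(out))), nil
	}

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
		}
		for _, c := range components {
			ids, err := listContainersByName(c)
			if err != nil {
				fmt.Printf("listing %s: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", c)
				continue
			}
			fmt.Printf("%d containers: %v\n", len(ids), ids)
		}
	}

	In this run only kube-apiserver, etcd, kube-scheduler, and kube-controller-manager ever return an ID; coredns, kube-proxy, kindnet, and storage-provisioner stay empty for the whole wait, consistent with an apiserver that never becomes reachable.
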
	I1213 11:23:27.805880  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:23:27.816088  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:23:27.816160  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:23:27.855651  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:27.855675  507417 cri.go:89] found id: ""
	I1213 11:23:27.855685  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:23:27.855739  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:27.859394  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:23:27.859464  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:23:27.883732  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:27.883754  507417 cri.go:89] found id: ""
	I1213 11:23:27.883763  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:23:27.883817  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:27.887678  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:23:27.887747  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:23:27.912497  507417 cri.go:89] found id: ""
	I1213 11:23:27.912521  507417 logs.go:282] 0 containers: []
	W1213 11:23:27.912531  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:23:27.912538  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:23:27.912602  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:23:27.936966  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:23:27.936984  507417 cri.go:89] found id: ""
	I1213 11:23:27.936993  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:23:27.937051  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:27.941018  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:23:27.941088  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:23:27.965361  507417 cri.go:89] found id: ""
	I1213 11:23:27.965386  507417 logs.go:282] 0 containers: []
	W1213 11:23:27.965394  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:23:27.965401  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:23:27.965458  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:23:27.990234  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:23:27.990257  507417 cri.go:89] found id: ""
	I1213 11:23:27.990266  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:23:27.990320  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:27.994047  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:23:27.994167  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:23:28.021645  507417 cri.go:89] found id: ""
	I1213 11:23:28.021719  507417 logs.go:282] 0 containers: []
	W1213 11:23:28.021745  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:23:28.021764  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:23:28.021864  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:23:28.051909  507417 cri.go:89] found id: ""
	I1213 11:23:28.051932  507417 logs.go:282] 0 containers: []
	W1213 11:23:28.051941  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:23:28.051957  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:23:28.051970  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:23:28.110344  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:23:28.110381  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:28.146647  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:23:28.146676  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:28.179396  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:23:28.179432  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:23:28.211522  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:23:28.211553  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:23:28.247281  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:23:28.247312  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:23:28.264573  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:23:28.264604  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:23:28.329408  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:23:28.329432  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:23:28.329445  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:23:28.361418  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:23:28.361450  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:23:30.892390  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:23:30.902609  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:23:30.902678  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:23:30.928514  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:30.928537  507417 cri.go:89] found id: ""
	I1213 11:23:30.928546  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:23:30.928624  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:30.932537  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:23:30.932608  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:23:30.959008  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:30.959032  507417 cri.go:89] found id: ""
	I1213 11:23:30.959041  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:23:30.959099  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:30.963338  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:23:30.963409  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:23:30.992631  507417 cri.go:89] found id: ""
	I1213 11:23:30.992659  507417 logs.go:282] 0 containers: []
	W1213 11:23:30.992668  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:23:30.992675  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:23:30.992738  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:23:31.020537  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:23:31.020562  507417 cri.go:89] found id: ""
	I1213 11:23:31.020570  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:23:31.020635  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:31.024530  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:23:31.024602  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:23:31.050396  507417 cri.go:89] found id: ""
	I1213 11:23:31.050420  507417 logs.go:282] 0 containers: []
	W1213 11:23:31.050429  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:23:31.050435  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:23:31.050545  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:23:31.077387  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:23:31.077409  507417 cri.go:89] found id: ""
	I1213 11:23:31.077418  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:23:31.077479  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:31.081738  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:23:31.081864  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:23:31.107681  507417 cri.go:89] found id: ""
	I1213 11:23:31.107708  507417 logs.go:282] 0 containers: []
	W1213 11:23:31.107718  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:23:31.107724  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:23:31.107819  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:23:31.133563  507417 cri.go:89] found id: ""
	I1213 11:23:31.133588  507417 logs.go:282] 0 containers: []
	W1213 11:23:31.133597  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:23:31.133640  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:23:31.133660  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:23:31.162762  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:23:31.162795  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:23:31.195815  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:23:31.195854  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:23:31.255700  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:23:31.255737  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:23:31.324133  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:23:31.324153  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:23:31.324166  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:31.363875  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:23:31.363907  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:23:31.393939  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:23:31.393967  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:23:31.413618  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:23:31.413702  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:31.467253  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:23:31.467326  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
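
	The cycles repeat on a roughly three-second rhythm: each one opens with "sudo pgrep -xnf kube-apiserver.*minikube.*" and each describe-nodes attempt is refused on localhost:8443. A hedged sketch of that polling loop, under the assumption of a fixed deadline (apiServerRunning and the 3-second sleep are illustrative; the real waiter's timeout and backoff may differ):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiServerRunning mirrors the "sudo pgrep -xnf kube-apiserver.*minikube.*"
	// probe at the top of every cycle; pgrep exits non-zero when no process matches.
	func apiServerRunning() bool {
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	func main() {
		deadline := time.Now().Add(2 * time.Minute) // illustrative; the real wait is much longer
		for time.Now().Before(deadline) {
			if apiServerRunning() {
				fmt.Println("kube-apiserver process found; probing health next")
			} else {
				fmt.Println("kube-apiserver process not found")
			}
			time.Sleep(3 * time.Second) // roughly the cadence between pgrep lines in this log
		}
		fmt.Println("gave up waiting for a healthy kube-apiserver")
	}

	Note that in the transcript the pgrep probe keeps succeeding while the kubectl call keeps failing, which is why the loop re-enters log gathering instead of exiting: the process exists, but nothing answers on the expected port.
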
	I1213 11:23:34.010095  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:23:34.026088  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:23:34.026164  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:23:34.063262  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:34.063286  507417 cri.go:89] found id: ""
	I1213 11:23:34.063294  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:23:34.063355  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:34.068261  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:23:34.068340  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:23:34.097391  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:34.097418  507417 cri.go:89] found id: ""
	I1213 11:23:34.097428  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:23:34.097486  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:34.101531  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:23:34.101604  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:23:34.128843  507417 cri.go:89] found id: ""
	I1213 11:23:34.128866  507417 logs.go:282] 0 containers: []
	W1213 11:23:34.128875  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:23:34.128882  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:23:34.128941  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:23:34.154381  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:23:34.154406  507417 cri.go:89] found id: ""
	I1213 11:23:34.154415  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:23:34.154472  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:34.158349  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:23:34.158473  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:23:34.185083  507417 cri.go:89] found id: ""
	I1213 11:23:34.185116  507417 logs.go:282] 0 containers: []
	W1213 11:23:34.185125  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:23:34.185131  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:23:34.185198  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:23:34.210672  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:23:34.210727  507417 cri.go:89] found id: ""
	I1213 11:23:34.210737  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:23:34.210834  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:34.214646  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:23:34.214741  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:23:34.241158  507417 cri.go:89] found id: ""
	I1213 11:23:34.241184  507417 logs.go:282] 0 containers: []
	W1213 11:23:34.241201  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:23:34.241208  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:23:34.241275  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:23:34.266836  507417 cri.go:89] found id: ""
	I1213 11:23:34.266859  507417 logs.go:282] 0 containers: []
	W1213 11:23:34.266868  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:23:34.266885  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:23:34.266897  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:23:34.296163  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:23:34.296195  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:23:34.312888  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:23:34.312918  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:23:34.345658  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:23:34.345694  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:23:34.376082  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:23:34.376116  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:23:34.442699  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:23:34.442735  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:23:34.515924  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:23:34.515944  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:23:34.515957  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:34.553769  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:23:34.553802  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:34.588170  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:23:34.588205  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:23:37.118859  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:23:37.130357  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:23:37.130450  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:23:37.164317  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:37.164347  507417 cri.go:89] found id: ""
	I1213 11:23:37.164357  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:23:37.164432  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:37.168695  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:23:37.168813  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:23:37.207049  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:37.207094  507417 cri.go:89] found id: ""
	I1213 11:23:37.207142  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:23:37.207231  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:37.211498  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:23:37.211598  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:23:37.240224  507417 cri.go:89] found id: ""
	I1213 11:23:37.240251  507417 logs.go:282] 0 containers: []
	W1213 11:23:37.240261  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:23:37.240271  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:23:37.240380  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:23:37.272559  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:23:37.272581  507417 cri.go:89] found id: ""
	I1213 11:23:37.272590  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:23:37.272675  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:37.276656  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:23:37.276750  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:23:37.302456  507417 cri.go:89] found id: ""
	I1213 11:23:37.302481  507417 logs.go:282] 0 containers: []
	W1213 11:23:37.302491  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:23:37.302526  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:23:37.302606  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:23:37.331988  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:23:37.332011  507417 cri.go:89] found id: ""
	I1213 11:23:37.332020  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:23:37.332109  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:37.336035  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:23:37.336102  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:23:37.365763  507417 cri.go:89] found id: ""
	I1213 11:23:37.365802  507417 logs.go:282] 0 containers: []
	W1213 11:23:37.365812  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:23:37.365826  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:23:37.365892  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:23:37.391317  507417 cri.go:89] found id: ""
	I1213 11:23:37.391340  507417 logs.go:282] 0 containers: []
	W1213 11:23:37.391349  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:23:37.391363  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:23:37.391375  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:23:37.407712  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:23:37.407744  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:23:37.495195  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:23:37.495220  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:23:37.495237  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:37.537138  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:23:37.537171  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:37.569793  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:23:37.569827  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:23:37.601182  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:23:37.601212  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:23:37.643705  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:23:37.643734  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:23:37.701270  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:23:37.701307  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:23:37.734792  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:23:37.734826  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:23:40.267910  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:23:40.278500  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:23:40.278570  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:23:40.317003  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:40.317022  507417 cri.go:89] found id: ""
	I1213 11:23:40.317030  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:23:40.317087  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:40.321386  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:23:40.321519  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:23:40.362768  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:40.362841  507417 cri.go:89] found id: ""
	I1213 11:23:40.362873  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:23:40.362952  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:40.367540  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:23:40.367660  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:23:40.401670  507417 cri.go:89] found id: ""
	I1213 11:23:40.401748  507417 logs.go:282] 0 containers: []
	W1213 11:23:40.401773  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:23:40.401799  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:23:40.401914  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:23:40.446026  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:23:40.446098  507417 cri.go:89] found id: ""
	I1213 11:23:40.446122  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:23:40.446199  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:40.452221  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:23:40.452381  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:23:40.515973  507417 cri.go:89] found id: ""
	I1213 11:23:40.516045  507417 logs.go:282] 0 containers: []
	W1213 11:23:40.516070  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:23:40.516089  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:23:40.516171  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:23:40.551944  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:23:40.551965  507417 cri.go:89] found id: ""
	I1213 11:23:40.551974  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:23:40.552045  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:40.557737  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:23:40.557843  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:23:40.588914  507417 cri.go:89] found id: ""
	I1213 11:23:40.588939  507417 logs.go:282] 0 containers: []
	W1213 11:23:40.588948  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:23:40.588974  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:23:40.589038  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:23:40.621015  507417 cri.go:89] found id: ""
	I1213 11:23:40.621040  507417 logs.go:282] 0 containers: []
	W1213 11:23:40.621049  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:23:40.621083  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:23:40.621099  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:23:40.655712  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:23:40.655742  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:40.698444  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:23:40.698519  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:40.748361  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:23:40.748432  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:23:40.786801  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:23:40.786877  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:23:40.826580  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:23:40.826656  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:23:40.889631  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:23:40.889669  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:23:40.907827  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:23:40.907857  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:23:40.973420  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:23:40.973483  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:23:40.973511  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:23:43.508903  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:23:43.520444  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:23:43.520519  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:23:43.553645  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:43.553671  507417 cri.go:89] found id: ""
	I1213 11:23:43.553679  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:23:43.553736  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:43.558166  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:23:43.558244  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:23:43.592933  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:43.592958  507417 cri.go:89] found id: ""
	I1213 11:23:43.592974  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:23:43.593028  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:43.597546  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:23:43.597629  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:23:43.633173  507417 cri.go:89] found id: ""
	I1213 11:23:43.633192  507417 logs.go:282] 0 containers: []
	W1213 11:23:43.633199  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:23:43.633205  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:23:43.633282  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:23:43.671377  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:23:43.671411  507417 cri.go:89] found id: ""
	I1213 11:23:43.671421  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:23:43.671484  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:43.675950  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:23:43.676038  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:23:43.705588  507417 cri.go:89] found id: ""
	I1213 11:23:43.705618  507417 logs.go:282] 0 containers: []
	W1213 11:23:43.705626  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:23:43.705633  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:23:43.705693  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:23:43.734373  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:23:43.734409  507417 cri.go:89] found id: ""
	I1213 11:23:43.734418  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:23:43.734479  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:43.738942  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:23:43.739035  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:23:43.770267  507417 cri.go:89] found id: ""
	I1213 11:23:43.770312  507417 logs.go:282] 0 containers: []
	W1213 11:23:43.770322  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:23:43.770329  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:23:43.770395  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:23:43.804490  507417 cri.go:89] found id: ""
	I1213 11:23:43.804513  507417 logs.go:282] 0 containers: []
	W1213 11:23:43.804522  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:23:43.804537  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:23:43.804548  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:23:43.877740  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:23:43.877783  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:23:43.898129  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:23:43.898170  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:23:43.942935  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:23:43.942966  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:23:43.993858  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:23:43.993889  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:23:44.092562  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:23:44.092582  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:23:44.092595  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:44.132662  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:23:44.132968  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:44.215385  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:23:44.215474  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:23:44.250831  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:23:44.250911  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:23:46.827426  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:23:46.837429  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:23:46.837501  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:23:46.862870  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:46.862892  507417 cri.go:89] found id: ""
	I1213 11:23:46.862901  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:23:46.862959  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:46.866715  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:23:46.866788  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:23:46.892761  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:46.892794  507417 cri.go:89] found id: ""
	I1213 11:23:46.892802  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:23:46.892859  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:46.896823  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:23:46.896898  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:23:46.921908  507417 cri.go:89] found id: ""
	I1213 11:23:46.921937  507417 logs.go:282] 0 containers: []
	W1213 11:23:46.921945  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:23:46.921952  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:23:46.922014  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:23:46.948150  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:23:46.948170  507417 cri.go:89] found id: ""
	I1213 11:23:46.948179  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:23:46.948240  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:46.952205  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:23:46.952281  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:23:46.978462  507417 cri.go:89] found id: ""
	I1213 11:23:46.978489  507417 logs.go:282] 0 containers: []
	W1213 11:23:46.978498  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:23:46.978505  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:23:46.978570  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:23:47.015238  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:23:47.015261  507417 cri.go:89] found id: ""
	I1213 11:23:47.015280  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:23:47.015337  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:47.019236  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:23:47.019308  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:23:47.046670  507417 cri.go:89] found id: ""
	I1213 11:23:47.046721  507417 logs.go:282] 0 containers: []
	W1213 11:23:47.046731  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:23:47.046738  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:23:47.046799  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:23:47.083908  507417 cri.go:89] found id: ""
	I1213 11:23:47.083931  507417 logs.go:282] 0 containers: []
	W1213 11:23:47.083940  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:23:47.083957  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:23:47.083968  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:23:47.186812  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:23:47.186835  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:23:47.186848  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:23:47.238496  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:23:47.238673  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:23:47.305858  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:23:47.305928  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:23:47.372898  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:23:47.372964  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:23:47.392402  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:23:47.392429  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:47.428681  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:23:47.428713  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:47.491750  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:23:47.491783  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:23:47.536962  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:23:47.536999  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
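
	The "Gathering logs for ..." steps shell out to the same handful of commands every cycle: journalctl for kubelet and containerd, a filtered dmesg, and "crictl logs --tail 400" for each container ID found earlier. A small sketch reproducing those calls (gather and gatherContainer are illustrative helpers, not minikube's logs.go API; the container ID passed in main is the kube-apiserver ID observed in this run):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gather runs one log-collection command through bash, as the ssh_runner
	// lines show, and prints whatever comes back, errors included.
	func gather(name, cmd string) {
		fmt.Printf("Gathering logs for %s ...\n", name)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", name, err)
		}
		fmt.Print(string(out))
	}

	// gatherContainer caps a single container's log at its last 400 lines,
	// matching the "crictl logs --tail 400 <id>" calls above.
	func gatherContainer(id string) {
		gather("container "+id[:12], "sudo /usr/local/bin/crictl logs --tail 400 "+id)
	}

	func main() {
		gather("kubelet", "sudo journalctl -u kubelet -n 400")
		gather("containerd", "sudo journalctl -u containerd -n 400")
		gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
		gatherContainer("d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee")
	}
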
	I1213 11:23:50.072498  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:23:50.083653  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:23:50.083728  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:23:50.115426  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:50.115448  507417 cri.go:89] found id: ""
	I1213 11:23:50.115457  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:23:50.115520  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:50.119728  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:23:50.119804  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:23:50.147394  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:50.147420  507417 cri.go:89] found id: ""
	I1213 11:23:50.147429  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:23:50.147489  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:50.151729  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:23:50.151816  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:23:50.188589  507417 cri.go:89] found id: ""
	I1213 11:23:50.188617  507417 logs.go:282] 0 containers: []
	W1213 11:23:50.188627  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:23:50.188635  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:23:50.188708  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:23:50.226416  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:23:50.226441  507417 cri.go:89] found id: ""
	I1213 11:23:50.226449  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:23:50.226504  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:50.230870  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:23:50.230950  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:23:50.261904  507417 cri.go:89] found id: ""
	I1213 11:23:50.261930  507417 logs.go:282] 0 containers: []
	W1213 11:23:50.261942  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:23:50.261950  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:23:50.262010  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:23:50.288131  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:23:50.288155  507417 cri.go:89] found id: ""
	I1213 11:23:50.288164  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:23:50.288228  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:50.292317  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:23:50.292415  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:23:50.317587  507417 cri.go:89] found id: ""
	I1213 11:23:50.317608  507417 logs.go:282] 0 containers: []
	W1213 11:23:50.317617  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:23:50.317624  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:23:50.317686  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:23:50.342192  507417 cri.go:89] found id: ""
	I1213 11:23:50.342216  507417 logs.go:282] 0 containers: []
	W1213 11:23:50.342225  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:23:50.342241  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:23:50.342254  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:23:50.358387  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:23:50.358418  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:50.392415  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:23:50.392446  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:50.426022  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:23:50.426052  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:23:50.455039  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:23:50.455069  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:23:50.485592  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:23:50.485620  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:23:50.543584  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:23:50.543655  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:23:50.617273  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:23:50.617345  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:23:50.617372  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:23:50.662339  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:23:50.662444  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
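The "describe nodes" failure above is the key symptom of this cycle: kubectl is pointed at the apiserver's default secure port and nothing is listening, so every control-plane query is refused at localhost:8443. A minimal reachability probe for that port, as a hedged Go sketch (this helper is illustrative only, not minikube code):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Probe the port that kubectl is being refused on in the log above.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		// "connection refused" here reproduces the symptom in the log.
    		fmt.Println("apiserver not reachable:", err)
    		return
    	}
    	defer conn.Close()
    	fmt.Println("something is listening on localhost:8443")
    }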
	I1213 11:23:53.217733  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:23:53.227709  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:23:53.227782  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:23:53.252319  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:53.252342  507417 cri.go:89] found id: ""
	I1213 11:23:53.252351  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:23:53.252408  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:53.256193  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:23:53.256266  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:23:53.283317  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:53.283338  507417 cri.go:89] found id: ""
	I1213 11:23:53.283347  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:23:53.283403  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:53.287318  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:23:53.287392  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:23:53.315361  507417 cri.go:89] found id: ""
	I1213 11:23:53.315384  507417 logs.go:282] 0 containers: []
	W1213 11:23:53.315393  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:23:53.315399  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:23:53.315458  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:23:53.340454  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:23:53.340484  507417 cri.go:89] found id: ""
	I1213 11:23:53.340494  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:23:53.340551  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:53.344298  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:23:53.344370  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:23:53.370102  507417 cri.go:89] found id: ""
	I1213 11:23:53.370181  507417 logs.go:282] 0 containers: []
	W1213 11:23:53.370204  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:23:53.370222  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:23:53.370308  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:23:53.404662  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:23:53.404683  507417 cri.go:89] found id: ""
	I1213 11:23:53.404691  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:23:53.404753  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:53.408502  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:23:53.408589  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:23:53.434383  507417 cri.go:89] found id: ""
	I1213 11:23:53.434458  507417 logs.go:282] 0 containers: []
	W1213 11:23:53.434483  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:23:53.434502  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:23:53.434581  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:23:53.460394  507417 cri.go:89] found id: ""
	I1213 11:23:53.460418  507417 logs.go:282] 0 containers: []
	W1213 11:23:53.460427  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:23:53.460441  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:23:53.460454  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:23:53.519399  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:23:53.519438  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:53.558521  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:23:53.558586  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:23:53.591473  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:23:53.591506  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:23:53.625573  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:23:53.625601  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:23:53.646897  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:23:53.646940  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:23:53.723757  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:23:53.723780  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:23:53.723799  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:53.772938  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:23:53.772974  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:23:53.801624  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:23:53.801659  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
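Each cycle begins with the same per-component container lookup: `sudo crictl ps -a --quiet --name=<component>` prints one container ID per line, or nothing when the component has no container (as with coredns, kube-proxy, kindnet, and storage-provisioner throughout this log). A minimal sketch of that lookup, assuming local exec rather than minikube's ssh_runner:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // findContainers mirrors the `crictl ps -a --quiet --name=X` calls above:
    // --quiet restricts output to container IDs, one per line.
    func findContainers(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		ids, err := findContainers(c)
    		if err != nil {
    			fmt.Printf("%s: lookup failed: %v\n", c, err)
    			continue
    		}
    		// Matches the "N containers: [...]" lines emitted by logs.go:282.
    		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
    	}
    }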
	I1213 11:23:56.331731  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:23:56.342805  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:23:56.342888  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:23:56.368189  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:56.368211  507417 cri.go:89] found id: ""
	I1213 11:23:56.368220  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:23:56.368277  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:56.372143  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:23:56.372213  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:23:56.400761  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:56.400784  507417 cri.go:89] found id: ""
	I1213 11:23:56.400793  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:23:56.400850  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:56.404617  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:23:56.404695  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:23:56.428717  507417 cri.go:89] found id: ""
	I1213 11:23:56.428742  507417 logs.go:282] 0 containers: []
	W1213 11:23:56.428751  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:23:56.428758  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:23:56.428819  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:23:56.454016  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:23:56.454037  507417 cri.go:89] found id: ""
	I1213 11:23:56.454045  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:23:56.454099  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:56.457709  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:23:56.457780  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:23:56.486768  507417 cri.go:89] found id: ""
	I1213 11:23:56.486795  507417 logs.go:282] 0 containers: []
	W1213 11:23:56.486804  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:23:56.486811  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:23:56.486870  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:23:56.513277  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:23:56.513300  507417 cri.go:89] found id: ""
	I1213 11:23:56.513309  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:23:56.513368  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:56.517302  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:23:56.517399  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:23:56.542369  507417 cri.go:89] found id: ""
	I1213 11:23:56.542396  507417 logs.go:282] 0 containers: []
	W1213 11:23:56.542405  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:23:56.542413  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:23:56.542476  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:23:56.569207  507417 cri.go:89] found id: ""
	I1213 11:23:56.569235  507417 logs.go:282] 0 containers: []
	W1213 11:23:56.569245  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:23:56.569278  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:23:56.569295  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:23:56.599427  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:23:56.599463  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:23:56.673950  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:23:56.674024  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:23:56.674052  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:56.710306  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:23:56.710341  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:23:56.738604  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:23:56.738632  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:23:56.769850  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:23:56.769879  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:23:56.828066  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:23:56.828103  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:23:56.845213  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:23:56.845246  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:56.877645  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:23:56.877678  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
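The cycles repeat on a roughly three-second cadence (11:23:50, :53, :56, :59, ...), each one gated by `sudo pgrep -xnf kube-apiserver.*minikube.*`. A hedged sketch of such a wait loop, again assuming local exec; the six-minute deadline is illustrative, not a value taken from minikube:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // apiserverRunning uses the same pgrep pattern seen in the log.
    // pgrep exits non-zero when no process matches, so Run() == nil
    // means a kube-apiserver process was found.
    func apiserverRunning() bool {
    	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		if apiserverRunning() {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    		time.Sleep(3 * time.Second) // matches the cadence of the cycles above
    	}
    	fmt.Println("timed out waiting for kube-apiserver")
    }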
	I1213 11:23:59.411028  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:23:59.421363  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:23:59.421437  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:23:59.448802  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:23:59.448821  507417 cri.go:89] found id: ""
	I1213 11:23:59.448829  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:23:59.448882  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:59.452665  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:23:59.452736  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:23:59.478440  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:59.478462  507417 cri.go:89] found id: ""
	I1213 11:23:59.478471  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:23:59.478533  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:59.482370  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:23:59.482453  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:23:59.508943  507417 cri.go:89] found id: ""
	I1213 11:23:59.508971  507417 logs.go:282] 0 containers: []
	W1213 11:23:59.508981  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:23:59.508988  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:23:59.509047  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:23:59.534752  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:23:59.534775  507417 cri.go:89] found id: ""
	I1213 11:23:59.534786  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:23:59.534844  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:59.538965  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:23:59.539074  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:23:59.563742  507417 cri.go:89] found id: ""
	I1213 11:23:59.563764  507417 logs.go:282] 0 containers: []
	W1213 11:23:59.563773  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:23:59.563779  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:23:59.563862  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:23:59.588864  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:23:59.588889  507417 cri.go:89] found id: ""
	I1213 11:23:59.588897  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:23:59.588981  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:23:59.592969  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:23:59.593101  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:23:59.619522  507417 cri.go:89] found id: ""
	I1213 11:23:59.619548  507417 logs.go:282] 0 containers: []
	W1213 11:23:59.619567  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:23:59.619574  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:23:59.619668  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:23:59.644852  507417 cri.go:89] found id: ""
	I1213 11:23:59.644876  507417 logs.go:282] 0 containers: []
	W1213 11:23:59.644885  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:23:59.644901  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:23:59.644912  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:23:59.706367  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:23:59.706402  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:23:59.723202  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:23:59.723231  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:23:59.795815  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:23:59.795837  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:23:59.795850  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:23:59.828550  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:23:59.828584  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:23:59.862141  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:23:59.862179  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:23:59.889269  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:23:59.889297  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:23:59.920908  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:23:59.920941  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:23:59.951049  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:23:59.951121  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
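With no apiserver answering, the gathering phase falls back to host-level sources: journalctl units, dmesg, and per-container `crictl logs --tail 400`. A minimal sketch of that fan-out under the same local-exec assumption, using only commands that appear verbatim in the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Each log source maps to one shell command, exactly as in the
    	// "Gathering logs for ..." lines above.
    	sources := map[string]string{
    		"kubelet":    "sudo journalctl -u kubelet -n 400",
    		"containerd": "sudo journalctl -u containerd -n 400",
    		"dmesg":      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    	}
    	for name, cmd := range sources {
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		if err != nil {
    			fmt.Printf("%s: %v\n", name, err)
    			continue
    		}
    		fmt.Printf("=== %s (%d bytes) ===\n", name, len(out))
    	}
    }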
	I1213 11:24:02.493882  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:02.504522  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:24:02.504591  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:24:02.537958  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:24:02.537980  507417 cri.go:89] found id: ""
	I1213 11:24:02.537989  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:24:02.538044  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:02.543295  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:24:02.543365  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:24:02.568328  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:24:02.568354  507417 cri.go:89] found id: ""
	I1213 11:24:02.568364  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:24:02.568421  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:02.572423  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:24:02.572496  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:24:02.597160  507417 cri.go:89] found id: ""
	I1213 11:24:02.597186  507417 logs.go:282] 0 containers: []
	W1213 11:24:02.597196  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:24:02.597202  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:24:02.597264  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:24:02.625003  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:24:02.625026  507417 cri.go:89] found id: ""
	I1213 11:24:02.625035  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:24:02.625092  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:02.629076  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:24:02.629155  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:24:02.655076  507417 cri.go:89] found id: ""
	I1213 11:24:02.655101  507417 logs.go:282] 0 containers: []
	W1213 11:24:02.655110  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:24:02.655116  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:24:02.655185  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:24:02.687657  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:24:02.687682  507417 cri.go:89] found id: ""
	I1213 11:24:02.687690  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:24:02.687745  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:02.691783  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:24:02.691859  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:24:02.716814  507417 cri.go:89] found id: ""
	I1213 11:24:02.716839  507417 logs.go:282] 0 containers: []
	W1213 11:24:02.716848  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:24:02.716854  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:24:02.716933  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:24:02.742494  507417 cri.go:89] found id: ""
	I1213 11:24:02.742521  507417 logs.go:282] 0 containers: []
	W1213 11:24:02.742531  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:24:02.742575  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:24:02.742594  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:24:02.801792  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:24:02.801841  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:24:02.818897  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:24:02.818930  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:24:02.884293  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:24:02.884315  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:24:02.884328  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:24:02.927365  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:24:02.927444  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:24:02.969266  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:24:02.969349  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:24:03.008832  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:24:03.008879  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:24:03.042605  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:24:03.042636  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:24:03.076908  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:24:03.076939  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:24:05.612402  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:05.623096  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:24:05.623169  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:24:05.648028  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:24:05.648050  507417 cri.go:89] found id: ""
	I1213 11:24:05.648058  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:24:05.648115  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:05.652070  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:24:05.652160  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:24:05.676951  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:24:05.676975  507417 cri.go:89] found id: ""
	I1213 11:24:05.676983  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:24:05.677038  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:05.685930  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:24:05.686001  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:24:05.711471  507417 cri.go:89] found id: ""
	I1213 11:24:05.711496  507417 logs.go:282] 0 containers: []
	W1213 11:24:05.711505  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:24:05.711512  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:24:05.711569  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:24:05.737006  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:24:05.737030  507417 cri.go:89] found id: ""
	I1213 11:24:05.737039  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:24:05.737095  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:05.740967  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:24:05.741045  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:24:05.765855  507417 cri.go:89] found id: ""
	I1213 11:24:05.765882  507417 logs.go:282] 0 containers: []
	W1213 11:24:05.765891  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:24:05.765898  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:24:05.765957  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:24:05.791543  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:24:05.791564  507417 cri.go:89] found id: ""
	I1213 11:24:05.791573  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:24:05.791626  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:05.795451  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:24:05.795550  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:24:05.820555  507417 cri.go:89] found id: ""
	I1213 11:24:05.820581  507417 logs.go:282] 0 containers: []
	W1213 11:24:05.820590  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:24:05.820597  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:24:05.820672  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:24:05.846176  507417 cri.go:89] found id: ""
	I1213 11:24:05.846201  507417 logs.go:282] 0 containers: []
	W1213 11:24:05.846209  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:24:05.846245  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:24:05.846261  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:24:05.893142  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:24:05.893175  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:24:05.932521  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:24:05.932601  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:24:06.009946  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:24:06.009971  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:24:06.009986  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:24:06.048169  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:24:06.048204  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:24:06.078980  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:24:06.079008  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:24:06.109376  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:24:06.109408  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:24:06.137596  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:24:06.137626  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:24:06.196032  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:24:06.196068  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:24:08.714380  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:08.728004  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:24:08.728106  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:24:08.755878  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:24:08.755901  507417 cri.go:89] found id: ""
	I1213 11:24:08.755909  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:24:08.755985  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:08.760022  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:24:08.760098  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:24:08.786483  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:24:08.786508  507417 cri.go:89] found id: ""
	I1213 11:24:08.786516  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:24:08.786576  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:08.791338  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:24:08.791411  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:24:08.817889  507417 cri.go:89] found id: ""
	I1213 11:24:08.817918  507417 logs.go:282] 0 containers: []
	W1213 11:24:08.817927  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:24:08.817933  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:24:08.818044  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:24:08.843555  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:24:08.843579  507417 cri.go:89] found id: ""
	I1213 11:24:08.843588  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:24:08.843665  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:08.847646  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:24:08.847739  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:24:08.873182  507417 cri.go:89] found id: ""
	I1213 11:24:08.873212  507417 logs.go:282] 0 containers: []
	W1213 11:24:08.873223  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:24:08.873230  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:24:08.873313  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:24:08.899645  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:24:08.899668  507417 cri.go:89] found id: ""
	I1213 11:24:08.899677  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:24:08.899760  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:08.903617  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:24:08.903693  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:24:08.932218  507417 cri.go:89] found id: ""
	I1213 11:24:08.932253  507417 logs.go:282] 0 containers: []
	W1213 11:24:08.932263  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:24:08.932270  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:24:08.932336  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:24:08.965015  507417 cri.go:89] found id: ""
	I1213 11:24:08.965048  507417 logs.go:282] 0 containers: []
	W1213 11:24:08.965058  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:24:08.965072  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:24:08.965084  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:24:09.039485  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:24:09.039505  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:24:09.039517  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:24:09.073415  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:24:09.073447  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:24:09.116426  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:24:09.116457  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:24:09.148699  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:24:09.148734  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:24:09.183832  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:24:09.183860  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:24:09.244360  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:24:09.244396  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:24:09.262350  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:24:09.262378  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:24:09.292872  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:24:09.292903  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:24:11.830836  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:11.841625  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:24:11.841690  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:24:11.869589  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:24:11.869609  507417 cri.go:89] found id: ""
	I1213 11:24:11.869618  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:24:11.869676  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:11.874726  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:24:11.874800  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:24:11.902323  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:24:11.902343  507417 cri.go:89] found id: ""
	I1213 11:24:11.902353  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:24:11.902411  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:11.906681  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:24:11.906807  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:24:11.973031  507417 cri.go:89] found id: ""
	I1213 11:24:11.973054  507417 logs.go:282] 0 containers: []
	W1213 11:24:11.973063  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:24:11.973069  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:24:11.973133  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:24:12.044511  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:24:12.044531  507417 cri.go:89] found id: ""
	I1213 11:24:12.044539  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:24:12.044594  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:12.049005  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:24:12.049072  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:24:12.081349  507417 cri.go:89] found id: ""
	I1213 11:24:12.081371  507417 logs.go:282] 0 containers: []
	W1213 11:24:12.081379  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:24:12.081385  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:24:12.081442  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:24:12.118809  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:24:12.118828  507417 cri.go:89] found id: ""
	I1213 11:24:12.118836  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:24:12.118890  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:12.123182  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:24:12.123305  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:24:12.156080  507417 cri.go:89] found id: ""
	I1213 11:24:12.156101  507417 logs.go:282] 0 containers: []
	W1213 11:24:12.156110  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:24:12.156117  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:24:12.156180  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:24:12.190372  507417 cri.go:89] found id: ""
	I1213 11:24:12.190449  507417 logs.go:282] 0 containers: []
	W1213 11:24:12.190471  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:24:12.190510  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:24:12.190540  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:24:12.275557  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:24:12.275573  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:24:12.275585  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:24:12.314870  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:24:12.320626  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:24:12.357322  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:24:12.357352  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:24:12.390825  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:24:12.390863  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:24:12.436250  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:24:12.436279  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:24:12.500542  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:24:12.500580  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:24:12.535397  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:24:12.535432  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:24:12.569269  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:24:12.569298  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:24:15.087226  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:15.098521  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:24:15.098596  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:24:15.125996  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:24:15.126021  507417 cri.go:89] found id: ""
	I1213 11:24:15.126030  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:24:15.126088  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:15.130042  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:24:15.130117  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:24:15.156306  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:24:15.156329  507417 cri.go:89] found id: ""
	I1213 11:24:15.156338  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:24:15.156396  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:15.160508  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:24:15.160587  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:24:15.186069  507417 cri.go:89] found id: ""
	I1213 11:24:15.186096  507417 logs.go:282] 0 containers: []
	W1213 11:24:15.186106  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:24:15.186112  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:24:15.186171  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:24:15.212013  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:24:15.212078  507417 cri.go:89] found id: ""
	I1213 11:24:15.212095  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:24:15.212165  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:15.216368  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:24:15.216442  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:24:15.244421  507417 cri.go:89] found id: ""
	I1213 11:24:15.244448  507417 logs.go:282] 0 containers: []
	W1213 11:24:15.244456  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:24:15.244463  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:24:15.244523  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:24:15.273889  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:24:15.273911  507417 cri.go:89] found id: ""
	I1213 11:24:15.273920  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:24:15.274021  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:15.278064  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:24:15.278154  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:24:15.302745  507417 cri.go:89] found id: ""
	I1213 11:24:15.302772  507417 logs.go:282] 0 containers: []
	W1213 11:24:15.302781  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:24:15.302786  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:24:15.302849  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:24:15.329601  507417 cri.go:89] found id: ""
	I1213 11:24:15.329625  507417 logs.go:282] 0 containers: []
	W1213 11:24:15.329633  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:24:15.329646  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:24:15.329664  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:24:15.365444  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:24:15.365475  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:24:15.393309  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:24:15.393338  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:24:15.422804  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:24:15.422838  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:24:15.452015  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:24:15.452048  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:24:15.490497  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:24:15.490530  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:24:15.552206  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:24:15.552245  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:24:15.569634  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:24:15.569665  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:24:15.640003  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:24:15.640022  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:24:15.640037  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:24:18.195030  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:18.205613  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:24:18.205686  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:24:18.230898  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:24:18.230921  507417 cri.go:89] found id: ""
	I1213 11:24:18.230931  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:24:18.230988  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:18.234919  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:24:18.234999  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:24:18.260315  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:24:18.260389  507417 cri.go:89] found id: ""
	I1213 11:24:18.260405  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:24:18.260475  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:18.264881  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:24:18.264982  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:24:18.290613  507417 cri.go:89] found id: ""
	I1213 11:24:18.290647  507417 logs.go:282] 0 containers: []
	W1213 11:24:18.290657  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:24:18.290681  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:24:18.290797  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:24:18.316457  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:24:18.316481  507417 cri.go:89] found id: ""
	I1213 11:24:18.316490  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:24:18.316558  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:18.320602  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:24:18.320675  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:24:18.346352  507417 cri.go:89] found id: ""
	I1213 11:24:18.346378  507417 logs.go:282] 0 containers: []
	W1213 11:24:18.346388  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:24:18.346394  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:24:18.346457  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:24:18.379227  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:24:18.379251  507417 cri.go:89] found id: ""
	I1213 11:24:18.379259  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:24:18.379318  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:18.383263  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:24:18.383365  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:24:18.407483  507417 cri.go:89] found id: ""
	I1213 11:24:18.407508  507417 logs.go:282] 0 containers: []
	W1213 11:24:18.407517  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:24:18.407523  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:24:18.407583  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:24:18.432223  507417 cri.go:89] found id: ""
	I1213 11:24:18.432250  507417 logs.go:282] 0 containers: []
	W1213 11:24:18.432259  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:24:18.432297  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:24:18.432315  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:24:18.490145  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:24:18.490180  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:24:18.507254  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:24:18.507333  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:24:18.541439  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:24:18.541469  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:24:18.574318  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:24:18.574347  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:24:18.605590  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:24:18.605622  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:24:18.691174  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:24:18.691196  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:24:18.691216  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:24:18.738230  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:24:18.738260  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:24:18.773749  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:24:18.773779  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:24:21.303364  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:21.314308  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:24:21.314385  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:24:21.339910  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:24:21.339933  507417 cri.go:89] found id: ""
	I1213 11:24:21.339942  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:24:21.340002  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:21.343785  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:24:21.343867  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:24:21.368634  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:24:21.368657  507417 cri.go:89] found id: ""
	I1213 11:24:21.368666  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:24:21.368725  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:21.372791  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:24:21.372871  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:24:21.402006  507417 cri.go:89] found id: ""
	I1213 11:24:21.402029  507417 logs.go:282] 0 containers: []
	W1213 11:24:21.402038  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:24:21.402044  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:24:21.402110  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:24:21.427682  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:24:21.427754  507417 cri.go:89] found id: ""
	I1213 11:24:21.427776  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:24:21.427850  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:21.431823  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:24:21.431890  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:24:21.457102  507417 cri.go:89] found id: ""
	I1213 11:24:21.457128  507417 logs.go:282] 0 containers: []
	W1213 11:24:21.457137  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:24:21.457146  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:24:21.457203  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:24:21.483578  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:24:21.483600  507417 cri.go:89] found id: ""
	I1213 11:24:21.483609  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:24:21.483688  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:21.487611  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:24:21.487683  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:24:21.513263  507417 cri.go:89] found id: ""
	I1213 11:24:21.513298  507417 logs.go:282] 0 containers: []
	W1213 11:24:21.513307  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:24:21.513330  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:24:21.513412  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:24:21.541032  507417 cri.go:89] found id: ""
	I1213 11:24:21.541057  507417 logs.go:282] 0 containers: []
	W1213 11:24:21.541065  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:24:21.541080  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:24:21.541091  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:24:21.600327  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:24:21.600363  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:24:21.617363  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:24:21.617394  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:24:21.708677  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:24:21.708706  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:24:21.708720  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:24:21.742531  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:24:21.742561  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:24:21.774222  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:24:21.774257  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:24:21.805096  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:24:21.805133  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:24:21.840501  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:24:21.840536  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:24:21.873256  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:24:21.873286  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:24:24.405942  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:24.416330  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:24:24.416405  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:24:24.445605  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:24:24.445626  507417 cri.go:89] found id: ""
	I1213 11:24:24.445635  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:24:24.445692  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:24.449505  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:24:24.449582  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:24:24.474440  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:24:24.474463  507417 cri.go:89] found id: ""
	I1213 11:24:24.474471  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:24:24.474527  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:24.478342  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:24:24.478420  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:24:24.505681  507417 cri.go:89] found id: ""
	I1213 11:24:24.505706  507417 logs.go:282] 0 containers: []
	W1213 11:24:24.505715  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:24:24.505722  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:24:24.505800  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:24:24.530603  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:24:24.530624  507417 cri.go:89] found id: ""
	I1213 11:24:24.530635  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:24:24.530752  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:24.534556  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:24:24.534666  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:24:24.560005  507417 cri.go:89] found id: ""
	I1213 11:24:24.560030  507417 logs.go:282] 0 containers: []
	W1213 11:24:24.560039  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:24:24.560046  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:24:24.560120  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:24:24.585912  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:24:24.585934  507417 cri.go:89] found id: ""
	I1213 11:24:24.585943  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:24:24.586029  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:24.589829  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:24:24.589916  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:24:24.615854  507417 cri.go:89] found id: ""
	I1213 11:24:24.615882  507417 logs.go:282] 0 containers: []
	W1213 11:24:24.615891  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:24:24.615897  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:24:24.615956  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:24:24.640637  507417 cri.go:89] found id: ""
	I1213 11:24:24.640662  507417 logs.go:282] 0 containers: []
	W1213 11:24:24.640671  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:24:24.640685  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:24:24.640697  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:24:24.679644  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:24:24.679677  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:24:24.749577  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:24:24.749672  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:24:24.807053  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:24:24.807138  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:24:24.878337  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:24:24.878414  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:24:24.950275  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:24:24.950318  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:24:24.971739  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:24:24.971771  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:24:25.069530  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:24:25.069563  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:24:25.069605  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:24:25.117660  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:24:25.117696  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:24:27.678996  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:27.692354  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:24:27.692432  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:24:27.720345  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:24:27.720372  507417 cri.go:89] found id: ""
	I1213 11:24:27.720382  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:24:27.720441  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:27.724473  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:24:27.724546  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:24:27.751233  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:24:27.751259  507417 cri.go:89] found id: ""
	I1213 11:24:27.751267  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:24:27.751323  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:27.755197  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:24:27.755286  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:24:27.785604  507417 cri.go:89] found id: ""
	I1213 11:24:27.785629  507417 logs.go:282] 0 containers: []
	W1213 11:24:27.785638  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:24:27.785645  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:24:27.785703  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:24:27.811938  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:24:27.811961  507417 cri.go:89] found id: ""
	I1213 11:24:27.811970  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:24:27.812028  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:27.815874  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:24:27.815950  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:24:27.842401  507417 cri.go:89] found id: ""
	I1213 11:24:27.842427  507417 logs.go:282] 0 containers: []
	W1213 11:24:27.842435  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:24:27.842442  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:24:27.842503  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:24:27.871562  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:24:27.871591  507417 cri.go:89] found id: ""
	I1213 11:24:27.871601  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:24:27.871656  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:27.875372  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:24:27.875447  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:24:27.900068  507417 cri.go:89] found id: ""
	I1213 11:24:27.900143  507417 logs.go:282] 0 containers: []
	W1213 11:24:27.900158  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:24:27.900166  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:24:27.900232  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:24:27.941739  507417 cri.go:89] found id: ""
	I1213 11:24:27.941764  507417 logs.go:282] 0 containers: []
	W1213 11:24:27.941773  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:24:27.941795  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:24:27.941807  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:24:27.959953  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:24:27.959987  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:24:28.037365  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:24:28.037388  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:24:28.037406  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:24:28.075182  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:24:28.075214  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:24:28.114903  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:24:28.114937  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:24:28.145503  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:24:28.145536  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:24:28.208540  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:24:28.208578  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:24:28.246080  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:24:28.246112  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:24:28.285466  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:24:28.285502  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:24:30.815677  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:30.825994  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:24:30.826062  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:24:30.851223  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:24:30.851247  507417 cri.go:89] found id: ""
	I1213 11:24:30.851255  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:24:30.851312  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:30.855171  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:24:30.855245  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:24:30.882138  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:24:30.882162  507417 cri.go:89] found id: ""
	I1213 11:24:30.882171  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:24:30.882226  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:30.885899  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:24:30.885981  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:24:30.912373  507417 cri.go:89] found id: ""
	I1213 11:24:30.912399  507417 logs.go:282] 0 containers: []
	W1213 11:24:30.912408  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:24:30.912415  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:24:30.912472  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:24:30.942883  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:24:30.942905  507417 cri.go:89] found id: ""
	I1213 11:24:30.942915  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:24:30.943001  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:30.947274  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:24:30.947352  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:24:30.974197  507417 cri.go:89] found id: ""
	I1213 11:24:30.974226  507417 logs.go:282] 0 containers: []
	W1213 11:24:30.974235  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:24:30.974241  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:24:30.974300  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:24:31.006623  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:24:31.006647  507417 cri.go:89] found id: ""
	I1213 11:24:31.006656  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:24:31.006787  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:31.011581  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:24:31.011657  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:24:31.039092  507417 cri.go:89] found id: ""
	I1213 11:24:31.039116  507417 logs.go:282] 0 containers: []
	W1213 11:24:31.039124  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:24:31.039130  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:24:31.039190  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:24:31.065901  507417 cri.go:89] found id: ""
	I1213 11:24:31.065983  507417 logs.go:282] 0 containers: []
	W1213 11:24:31.066006  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:24:31.066046  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:24:31.066076  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:24:31.100896  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:24:31.100932  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:24:31.135569  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:24:31.135607  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:24:31.169038  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:24:31.169079  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:24:31.232963  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:24:31.233000  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:24:31.252039  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:24:31.252079  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:24:31.322735  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:24:31.322764  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:24:31.322778  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:24:31.365252  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:24:31.365286  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:24:31.398928  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:24:31.399003  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:24:33.934853  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:33.947919  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:24:33.947999  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:24:33.978928  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:24:33.978952  507417 cri.go:89] found id: ""
	I1213 11:24:33.978961  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:24:33.979022  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:33.982855  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:24:33.982929  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:24:34.017374  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:24:34.017398  507417 cri.go:89] found id: ""
	I1213 11:24:34.017407  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:24:34.017464  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:34.022188  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:24:34.022268  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:24:34.060387  507417 cri.go:89] found id: ""
	I1213 11:24:34.060414  507417 logs.go:282] 0 containers: []
	W1213 11:24:34.060424  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:24:34.060431  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:24:34.060496  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:24:34.087765  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:24:34.087787  507417 cri.go:89] found id: ""
	I1213 11:24:34.087796  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:24:34.087855  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:34.091873  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:24:34.091947  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:24:34.118723  507417 cri.go:89] found id: ""
	I1213 11:24:34.118745  507417 logs.go:282] 0 containers: []
	W1213 11:24:34.118754  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:24:34.118760  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:24:34.118820  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:24:34.145754  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:24:34.145795  507417 cri.go:89] found id: ""
	I1213 11:24:34.145804  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:24:34.145888  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:34.151048  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:24:34.151146  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:24:34.176206  507417 cri.go:89] found id: ""
	I1213 11:24:34.176283  507417 logs.go:282] 0 containers: []
	W1213 11:24:34.176309  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:24:34.176321  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:24:34.176384  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:24:34.206553  507417 cri.go:89] found id: ""
	I1213 11:24:34.206580  507417 logs.go:282] 0 containers: []
	W1213 11:24:34.206589  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:24:34.206603  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:24:34.206614  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:24:34.223627  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:24:34.223658  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:24:34.256451  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:24:34.256484  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:24:34.297620  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:24:34.297655  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:24:34.327391  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:24:34.327422  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:24:34.358847  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:24:34.358889  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:24:34.391119  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:24:34.391152  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:24:34.449827  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:24:34.449866  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:24:34.521749  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:24:34.521777  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:24:34.521805  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:24:37.055109  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:37.066441  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:24:37.066511  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:24:37.093602  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:24:37.093631  507417 cri.go:89] found id: ""
	I1213 11:24:37.093641  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:24:37.093698  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:37.097598  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:24:37.097669  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:24:37.123083  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:24:37.123106  507417 cri.go:89] found id: ""
	I1213 11:24:37.123114  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:24:37.123171  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:37.126952  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:24:37.127028  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:24:37.152671  507417 cri.go:89] found id: ""
	I1213 11:24:37.152696  507417 logs.go:282] 0 containers: []
	W1213 11:24:37.152704  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:24:37.152710  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:24:37.152816  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:24:37.179630  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:24:37.179705  507417 cri.go:89] found id: ""
	I1213 11:24:37.179729  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:24:37.179815  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:37.183727  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:24:37.183799  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:24:37.213800  507417 cri.go:89] found id: ""
	I1213 11:24:37.213822  507417 logs.go:282] 0 containers: []
	W1213 11:24:37.213831  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:24:37.213838  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:24:37.213896  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:24:37.239187  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:24:37.239218  507417 cri.go:89] found id: ""
	I1213 11:24:37.239227  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:24:37.239291  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:37.243534  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:24:37.243610  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:24:37.271030  507417 cri.go:89] found id: ""
	I1213 11:24:37.271067  507417 logs.go:282] 0 containers: []
	W1213 11:24:37.271077  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:24:37.271084  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:24:37.271160  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:24:37.297676  507417 cri.go:89] found id: ""
	I1213 11:24:37.297707  507417 logs.go:282] 0 containers: []
	W1213 11:24:37.297716  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:24:37.297731  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:24:37.297742  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:24:37.333068  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:24:37.333100  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:24:37.365963  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:24:37.365995  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:24:37.399862  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:24:37.399893  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:24:37.434819  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:24:37.434848  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:24:37.494036  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:24:37.494070  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:24:37.511332  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:24:37.511365  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:24:37.584422  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:24:37.584441  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:24:37.584465  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:24:37.614166  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:24:37.614199  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:24:40.143860  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:40.155784  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:24:40.155868  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:24:40.185654  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:24:40.185677  507417 cri.go:89] found id: ""
	I1213 11:24:40.185687  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:24:40.185750  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:40.190440  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:24:40.190528  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:24:40.218552  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:24:40.218574  507417 cri.go:89] found id: ""
	I1213 11:24:40.218584  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:24:40.218648  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:40.222822  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:24:40.222898  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:24:40.255267  507417 cri.go:89] found id: ""
	I1213 11:24:40.255294  507417 logs.go:282] 0 containers: []
	W1213 11:24:40.255303  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:24:40.255311  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:24:40.255373  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:24:40.285526  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:24:40.285549  507417 cri.go:89] found id: ""
	I1213 11:24:40.285558  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:24:40.285615  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:40.289665  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:24:40.289739  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:24:40.314757  507417 cri.go:89] found id: ""
	I1213 11:24:40.314781  507417 logs.go:282] 0 containers: []
	W1213 11:24:40.314790  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:24:40.314796  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:24:40.314856  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:24:40.344325  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:24:40.344351  507417 cri.go:89] found id: ""
	I1213 11:24:40.344359  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:24:40.344413  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:40.348285  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:24:40.348376  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:24:40.373437  507417 cri.go:89] found id: ""
	I1213 11:24:40.373510  507417 logs.go:282] 0 containers: []
	W1213 11:24:40.373533  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:24:40.373550  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:24:40.373637  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:24:40.402567  507417 cri.go:89] found id: ""
	I1213 11:24:40.402640  507417 logs.go:282] 0 containers: []
	W1213 11:24:40.402666  507417 logs.go:284] No container was found matching "storage-provisioner"
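
Before each bundle, minikube probes for one container per control-plane component; an empty crictl result becomes the "No container was found matching ..." warning. At this point only kube-apiserver, etcd, kube-scheduler and kube-controller-manager have containers, while coredns, kube-proxy, kindnet and storage-provisioner have none. A minimal sketch of the same probe; the loop is my own wrapper around the per-component command shown above:

    # One query per component; empty output means no container exists for it.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      echo "$c: ${ids:-<none>}"
    done
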
	I1213 11:24:40.402739  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:24:40.402760  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:24:40.461274  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:24:40.461310  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:24:40.489052  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:24:40.489080  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:24:40.505891  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:24:40.505922  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:24:40.569074  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:24:40.569141  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:24:40.569163  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:24:40.605033  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:24:40.605066  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:24:40.637693  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:24:40.637723  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:24:40.677494  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:24:40.677527  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:24:40.717191  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:24:40.717233  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:24:43.254826  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:43.265660  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:24:43.265742  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:24:43.293627  507417 cri.go:89] found id: "d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:24:43.293649  507417 cri.go:89] found id: ""
	I1213 11:24:43.293658  507417 logs.go:282] 1 containers: [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee]
	I1213 11:24:43.293729  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:43.297774  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:24:43.297848  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:24:43.323133  507417 cri.go:89] found id: "ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:24:43.323156  507417 cri.go:89] found id: ""
	I1213 11:24:43.323165  507417 logs.go:282] 1 containers: [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9]
	I1213 11:24:43.323223  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:43.327242  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:24:43.327320  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:24:43.354973  507417 cri.go:89] found id: ""
	I1213 11:24:43.355002  507417 logs.go:282] 0 containers: []
	W1213 11:24:43.355014  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:24:43.355021  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:24:43.355079  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:24:43.379902  507417 cri.go:89] found id: "3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
	I1213 11:24:43.379924  507417 cri.go:89] found id: ""
	I1213 11:24:43.379932  507417 logs.go:282] 1 containers: [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d]
	I1213 11:24:43.379992  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:43.383876  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:24:43.383950  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:24:43.407762  507417 cri.go:89] found id: ""
	I1213 11:24:43.407839  507417 logs.go:282] 0 containers: []
	W1213 11:24:43.407863  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:24:43.407883  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:24:43.407953  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:24:43.433344  507417 cri.go:89] found id: "da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:24:43.433364  507417 cri.go:89] found id: ""
	I1213 11:24:43.433373  507417 logs.go:282] 1 containers: [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a]
	I1213 11:24:43.433426  507417 ssh_runner.go:195] Run: which crictl
	I1213 11:24:43.437156  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:24:43.437228  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:24:43.465771  507417 cri.go:89] found id: ""
	I1213 11:24:43.465796  507417 logs.go:282] 0 containers: []
	W1213 11:24:43.465805  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:24:43.465811  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:24:43.465869  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:24:43.489963  507417 cri.go:89] found id: ""
	I1213 11:24:43.489988  507417 logs.go:282] 0 containers: []
	W1213 11:24:43.489998  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:24:43.490011  507417 logs.go:123] Gathering logs for kube-controller-manager [da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a] ...
	I1213 11:24:43.490024  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a"
	I1213 11:24:43.518328  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:24:43.518354  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:24:43.581559  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:24:43.581578  507417 logs.go:123] Gathering logs for kube-apiserver [d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee] ...
	I1213 11:24:43.581592  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee"
	I1213 11:24:43.619702  507417 logs.go:123] Gathering logs for etcd [ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9] ...
	I1213 11:24:43.619737  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9"
	I1213 11:24:43.651442  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:24:43.651472  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:24:43.683885  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:24:43.683926  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:24:43.726903  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:24:43.726933  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:24:43.797273  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:24:43.797316  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:24:43.815302  507417 logs.go:123] Gathering logs for kube-scheduler [3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d] ...
	I1213 11:24:43.815333  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d"
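
Between bundles the loop re-checks for a live apiserver process; the timestamps (...:40, ...:43, ...:46) suggest roughly three-second retries until the restart budget runs out at 11:24:46. The probe is the pgrep invocation from the log (quoted here for safety); the retry loop around it is illustrative:

    # Wait for a kube-apiserver process belonging to this minikube node.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
      sleep 3   # interval inferred from the log timestamps, not a documented value
    done
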
	I1213 11:24:46.347539  507417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:46.359414  507417 kubeadm.go:602] duration metric: took 4m4.520199713s to restartPrimaryControlPlane
	W1213 11:24:46.359499  507417 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1213 11:24:46.359589  507417 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 11:24:46.855270  507417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:24:46.868869  507417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 11:24:46.876935  507417 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:24:46.877003  507417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:24:46.884712  507417 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:24:46.884729  507417 kubeadm.go:158] found existing configuration files:
	
	I1213 11:24:46.884782  507417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:24:46.892716  507417 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:24:46.892786  507417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:24:46.907302  507417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:24:46.916621  507417 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:24:46.916687  507417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:24:46.924132  507417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:24:46.931876  507417 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:24:46.931938  507417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:24:46.939692  507417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:24:46.948125  507417 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:24:46.948194  507417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
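
The stale-config cleanup follows a check-then-remove pattern: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint, and a file that does not match is deleted. In this run every grep exits with status 2 ("No such file or directory"), so the rm -f calls are no-ops. A sketch of the pattern; the loop is my own condensation of the four file-by-file commands above:

    # Remove kubeconfigs that do not point at the expected endpoint.
    # grep exit status 2 here just means the file is already gone.
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep https://control-plane.minikube.internal:8443 "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done
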
	I1213 11:24:46.955704  507417 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:24:47.073073  507417 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 11:24:47.073497  507417 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 11:24:47.143619  507417 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 11:28:56.079543  507417 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 11:28:56.079595  507417 kubeadm.go:319] 
	I1213 11:28:56.079674  507417 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
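
Of the three preflight warnings, the missing "configs" kernel module and the disabled kubelet service are routine in this environment; the cgroups v1 deprecation is the one worth noting, since per the warning kubelet v1.35+ refuses cgroup v1 unless the FailCgroupV1 configuration option is set to false, which plausibly explains the kubelet never becoming healthy below. A standard way to confirm which cgroup hierarchy the host mounts (a general check, not taken from this log):

    # cgroup2fs => unified cgroup v2; tmpfs => legacy cgroup v1 hierarchy
    stat -fc %T /sys/fs/cgroup
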
	I1213 11:28:56.084444  507417 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:28:56.084509  507417 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:28:56.084620  507417 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:28:56.084696  507417 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:28:56.084736  507417 kubeadm.go:319] OS: Linux
	I1213 11:28:56.084808  507417 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:28:56.084862  507417 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:28:56.084934  507417 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:28:56.085002  507417 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:28:56.085059  507417 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:28:56.085127  507417 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:28:56.085196  507417 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:28:56.085249  507417 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:28:56.085311  507417 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:28:56.085397  507417 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:28:56.085501  507417 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:28:56.085622  507417 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:28:56.085708  507417 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:28:56.088631  507417 out.go:252]   - Generating certificates and keys ...
	I1213 11:28:56.088729  507417 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:28:56.088801  507417 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:28:56.088883  507417 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 11:28:56.088959  507417 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 11:28:56.089037  507417 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 11:28:56.089095  507417 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 11:28:56.089161  507417 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 11:28:56.089232  507417 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 11:28:56.089312  507417 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 11:28:56.089388  507417 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 11:28:56.089429  507417 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 11:28:56.089487  507417 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:28:56.089540  507417 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:28:56.089600  507417 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:28:56.089662  507417 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:28:56.089729  507417 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:28:56.089787  507417 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:28:56.089873  507417 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:28:56.089944  507417 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:28:56.092872  507417 out.go:252]   - Booting up control plane ...
	I1213 11:28:56.093028  507417 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:28:56.093136  507417 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:28:56.093209  507417 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:28:56.093321  507417 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:28:56.093418  507417 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:28:56.093525  507417 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:28:56.093612  507417 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:28:56.093659  507417 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:28:56.093796  507417 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:28:56.093903  507417 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 11:28:56.093970  507417 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000204784s
	I1213 11:28:56.093979  507417 kubeadm.go:319] 
	I1213 11:28:56.094035  507417 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 11:28:56.094070  507417 kubeadm.go:319] 	- The kubelet is not running
	I1213 11:28:56.094177  507417 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 11:28:56.094185  507417 kubeadm.go:319] 
	I1213 11:28:56.094289  507417 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 11:28:56.094324  507417 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 11:28:56.094358  507417 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
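
The wait-control-plane phase gives the kubelet four minutes to answer its health endpoint. "connection refused" means nothing ever bound 127.0.0.1:10248, i.e. the kubelet process is not running at all, as opposed to running but reporting unhealthy. kubeadm's own probe is the curl shown in the error message; the retry loop here is illustrative:

    # Poll the kubelet health endpoint the way the kubeadm error describes.
    for _ in $(seq 1 240); do
      curl -sSL http://127.0.0.1:10248/healthz && break   # exits 0 once the kubelet responds
      sleep 1
    done
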
	W1213 11:28:56.094516  507417 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000204784s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1213 11:28:56.094600  507417 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 11:28:56.094735  507417 kubeadm.go:319] 
	I1213 11:28:56.505918  507417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:28:56.519561  507417 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:28:56.519625  507417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:28:56.527672  507417 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:28:56.527696  507417 kubeadm.go:158] found existing configuration files:
	
	I1213 11:28:56.527745  507417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:28:56.535775  507417 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:28:56.535866  507417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:28:56.544221  507417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:28:56.551995  507417 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:28:56.552087  507417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:28:56.559923  507417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:28:56.567848  507417 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:28:56.567960  507417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:28:56.575862  507417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:28:56.583920  507417 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:28:56.584003  507417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 11:28:56.591709  507417 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:28:56.631826  507417 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:28:56.631891  507417 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:28:56.699068  507417 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:28:56.699145  507417 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:28:56.699188  507417 kubeadm.go:319] OS: Linux
	I1213 11:28:56.699236  507417 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:28:56.699291  507417 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:28:56.699342  507417 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:28:56.699394  507417 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:28:56.699445  507417 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:28:56.699497  507417 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:28:56.699547  507417 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:28:56.699599  507417 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:28:56.699649  507417 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:28:56.766267  507417 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:28:56.766380  507417 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:28:56.766488  507417 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:28:56.779103  507417 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:28:56.784300  507417 out.go:252]   - Generating certificates and keys ...
	I1213 11:28:56.784451  507417 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:28:56.784574  507417 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:28:56.784698  507417 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 11:28:56.784805  507417 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 11:28:56.784929  507417 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 11:28:56.785021  507417 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 11:28:56.785138  507417 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 11:28:56.785248  507417 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 11:28:56.785384  507417 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 11:28:56.785507  507417 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 11:28:56.785572  507417 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 11:28:56.785659  507417 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:28:56.895988  507417 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:28:56.960551  507417 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:28:57.137192  507417 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:28:57.492893  507417 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:28:57.727088  507417 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:28:57.727205  507417 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:28:57.727288  507417 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:28:57.730499  507417 out.go:252]   - Booting up control plane ...
	I1213 11:28:57.730610  507417 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:28:57.730707  507417 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:28:57.730781  507417 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:28:57.751039  507417 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:28:57.752152  507417 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:28:57.763865  507417 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:28:57.763973  507417 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:28:57.764017  507417 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:28:57.948262  507417 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:28:57.948397  507417 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 11:32:57.952266  507417 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000862989s
	I1213 11:32:57.952302  507417 kubeadm.go:319] 
	I1213 11:32:57.952360  507417 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 11:32:57.952397  507417 kubeadm.go:319] 	- The kubelet is not running
	I1213 11:32:57.952507  507417 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 11:32:57.952517  507417 kubeadm.go:319] 
	I1213 11:32:57.952622  507417 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 11:32:57.952658  507417 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 11:32:57.952693  507417 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 11:32:57.952701  507417 kubeadm.go:319] 
	I1213 11:32:57.952937  507417 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 11:32:57.953347  507417 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 11:32:57.953457  507417 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 11:32:57.953706  507417 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 11:32:57.953716  507417 kubeadm.go:319] 
	I1213 11:32:57.953784  507417 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
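
Both attempts fail the same way: the first reports "connection refused" at the healthz port, the second "context deadline exceeded" when the four-minute wait expires, and neither ever gets a running kubelet. The follow-up kubeadm recommends, and the natural next step when reading this report, is to inspect the kubelet unit directly:

    # The two commands kubeadm suggests for a kubelet that never became healthy.
    systemctl status kubelet     # unit state: running, failed, or crash-looping?
    journalctl -xeu kubelet      # recent kubelet log with systemd context
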
	I1213 11:32:57.953840  507417 kubeadm.go:403] duration metric: took 12m16.200368432s to StartCluster
	I1213 11:32:57.953876  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:32:57.953942  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:32:57.999170  507417 cri.go:89] found id: ""
	I1213 11:32:57.999197  507417 logs.go:282] 0 containers: []
	W1213 11:32:57.999207  507417 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:32:57.999214  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:32:57.999280  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:32:58.048694  507417 cri.go:89] found id: ""
	I1213 11:32:58.048721  507417 logs.go:282] 0 containers: []
	W1213 11:32:58.048730  507417 logs.go:284] No container was found matching "etcd"
	I1213 11:32:58.048737  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:32:58.048799  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:32:58.076669  507417 cri.go:89] found id: ""
	I1213 11:32:58.076696  507417 logs.go:282] 0 containers: []
	W1213 11:32:58.076705  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:32:58.076711  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:32:58.076768  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:32:58.109796  507417 cri.go:89] found id: ""
	I1213 11:32:58.109819  507417 logs.go:282] 0 containers: []
	W1213 11:32:58.109828  507417 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:32:58.109835  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:32:58.109893  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:32:58.137392  507417 cri.go:89] found id: ""
	I1213 11:32:58.137414  507417 logs.go:282] 0 containers: []
	W1213 11:32:58.137423  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:32:58.137429  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:32:58.137484  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:32:58.180466  507417 cri.go:89] found id: ""
	I1213 11:32:58.180489  507417 logs.go:282] 0 containers: []
	W1213 11:32:58.180499  507417 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:32:58.180505  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:32:58.180574  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:32:58.240938  507417 cri.go:89] found id: ""
	I1213 11:32:58.240968  507417 logs.go:282] 0 containers: []
	W1213 11:32:58.240978  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:32:58.240985  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:32:58.241042  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:32:58.334900  507417 cri.go:89] found id: ""
	I1213 11:32:58.334930  507417 logs.go:282] 0 containers: []
	W1213 11:32:58.334939  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:32:58.334950  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:32:58.334961  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:32:58.448923  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:32:58.448961  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:32:58.477807  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:32:58.477835  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:32:58.589265  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:32:58.589295  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:32:58.589307  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:32:58.661645  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:32:58.661692  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 11:32:58.728616  507417 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000862989s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 11:32:58.728658  507417 out.go:285] * 
	W1213 11:32:58.731642  507417 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 11:32:58.743673  507417 out.go:203] 
	W1213 11:32:58.746746  507417 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	
	W1213 11:32:58.746885  507417 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 11:32:58.746946  507417 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 11:32:58.751909  507417 out.go:203] 

                                                
                                                
** /stderr **
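One way to act on the kubelet failure above is to run the checks kubeadm suggests inside the node, then retry with the cgroup driver minikube recommends. A minimal sketch, reusing only the profile name and flags that appear in this log (the exact invocations below are assumptions, not part of the test run):

	# inspect the kubelet inside the node, as the kubeadm output suggests
	minikube ssh -p kubernetes-upgrade-415704 sudo systemctl status kubelet --no-pager
	minikube ssh -p kubernetes-upgrade-415704 sudo journalctl -xeu kubelet --no-pager
	# probe the health endpoint kubeadm was polling
	minikube ssh -p kubernetes-upgrade-415704 curl -sSL http://127.0.0.1:10248/healthz
	# retry the start with the suggested kubelet cgroup driver
	out/minikube-linux-arm64 start -p kubernetes-upgrade-415704 --memory=3072 \
	  --kubernetes-version=v1.35.0-beta.0 --driver=docker --container-runtime=containerd \
	  --extra-config=kubelet.cgroup-driver=systemd
	# if it still fails, capture logs for an issue, as the boxed advice above says
	out/minikube-linux-arm64 -p kubernetes-upgrade-415704 logs --file=logs.txt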
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-415704 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd : exit status 109
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-415704 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-415704 version --output=json: exit status 1 (94.619157ms)

                                                
                                                
-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
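The stdout above shows why the probe exits non-zero: only `clientVersion` is printed, and the server half fails because 192.168.76.2:8443 refuses connections. A client-only query side-steps the dead apiserver (a sketch, not part of the test):

	# client-only version query; never contacts the refused 192.168.76.2:8443
	kubectl --context kubernetes-upgrade-415704 version --client --output=json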
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-13 11:33:00.54288916 +0000 UTC m=+4829.927309084
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect kubernetes-upgrade-415704
helpers_test.go:244: (dbg) docker inspect kubernetes-upgrade-415704:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e5412c33d7b1828d38c2b04f861f835e93b8d598b3555f88ba474e3e7da17d63",
	        "Created": "2025-12-13T11:19:54.111080282Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 507541,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T11:20:23.626929177Z",
	            "FinishedAt": "2025-12-13T11:20:22.569507564Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/e5412c33d7b1828d38c2b04f861f835e93b8d598b3555f88ba474e3e7da17d63/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e5412c33d7b1828d38c2b04f861f835e93b8d598b3555f88ba474e3e7da17d63/hostname",
	        "HostsPath": "/var/lib/docker/containers/e5412c33d7b1828d38c2b04f861f835e93b8d598b3555f88ba474e3e7da17d63/hosts",
	        "LogPath": "/var/lib/docker/containers/e5412c33d7b1828d38c2b04f861f835e93b8d598b3555f88ba474e3e7da17d63/e5412c33d7b1828d38c2b04f861f835e93b8d598b3555f88ba474e3e7da17d63-json.log",
	        "Name": "/kubernetes-upgrade-415704",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-415704:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-415704",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e5412c33d7b1828d38c2b04f861f835e93b8d598b3555f88ba474e3e7da17d63",
	                "LowerDir": "/var/lib/docker/overlay2/7a359129d21cdc66b7f33f3e5c061e5741389771edaad28291677782bc485ba2-init/diff:/var/lib/docker/overlay2/5d0ca86320f2c06765995d644a73af30cd6ddb36f5c1a2a6b1ebb24af53cdabd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7a359129d21cdc66b7f33f3e5c061e5741389771edaad28291677782bc485ba2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7a359129d21cdc66b7f33f3e5c061e5741389771edaad28291677782bc485ba2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7a359129d21cdc66b7f33f3e5c061e5741389771edaad28291677782bc485ba2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-415704",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-415704/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-415704",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-415704",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-415704",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "debf4a659d0c8660213649994e5f076dbf08a2fe621c3d81b2ff1f9c73dc20d5",
	            "SandboxKey": "/var/run/docker/netns/debf4a659d0c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33350"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33351"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33354"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33352"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33353"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-415704": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:2a:b6:81:2a:2d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d68979157457b41c0888339b9caa112c8dd6edc7d5eddfda8f33f0c4704b056a",
	                    "EndpointID": "3a1be271e5ad7371a8f50b3c26c87b6e3f7418bca60d95e4f0cd86656377b2f8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-415704",
	                        "e5412c33d7b1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
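Single fields can be pulled out of an inspect dump like the one above with a Go-template `--format` instead of scanning the full JSON. A sketch using names taken from this dump:

	# host port published for the apiserver port 8443/tcp (33353 in the dump above)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' kubernetes-upgrade-415704
	# container IP on the profile network (192.168.76.2 above)
	docker inspect -f '{{(index .NetworkSettings.Networks "kubernetes-upgrade-415704").IPAddress}}' kubernetes-upgrade-415704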
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-415704 -n kubernetes-upgrade-415704
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-415704 -n kubernetes-upgrade-415704: exit status 2 (383.097467ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
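The `--format={{.Host}}` template above reports only the container state, which is why a "Running" host can still exit 2 when Kubernetes itself is down. A broader template is sketched below; the field names Kubelet and APIServer are assumptions about minikube's status struct, not shown in this log:

	# report host, kubelet and apiserver state in one line (field names assumed)
	out/minikube-linux-arm64 status -p kubernetes-upgrade-415704 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'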
helpers_test.go:253: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-415704 logs -n 25
helpers_test.go:261: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                       │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-270721 sudo systemctl status kubelet --all --full --no-pager                                           │ cilium-270721            │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │                     │
	│ ssh     │ -p cilium-270721 sudo systemctl cat kubelet --no-pager                                                           │ cilium-270721            │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │                     │
	│ ssh     │ -p cilium-270721 sudo journalctl -xeu kubelet --all --full --no-pager                                            │ cilium-270721            │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │                     │
	│ ssh     │ -p cilium-270721 sudo cat /etc/kubernetes/kubelet.conf                                                           │ cilium-270721            │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │                     │
	│ ssh     │ -p cilium-270721 sudo cat /var/lib/kubelet/config.yaml                                                           │ cilium-270721            │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │                     │
	│ ssh     │ -p cilium-270721 sudo systemctl status docker --all --full --no-pager                                            │ cilium-270721            │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │                     │
	│ ssh     │ -p cilium-270721 sudo systemctl cat docker --no-pager                                                            │ cilium-270721            │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │                     │
	│ ssh     │ -p cilium-270721 sudo cat /etc/docker/daemon.json                                                                │ cilium-270721            │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │                     │
	│ ssh     │ -p cilium-270721 sudo docker system info                                                                         │ cilium-270721            │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │                     │
	│ ssh     │ -p cilium-270721 sudo systemctl status cri-docker --all --full --no-pager                                        │ cilium-270721            │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │                     │
	│ ssh     │ -p cilium-270721 sudo systemctl cat cri-docker --no-pager                                                        │ cilium-270721            │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │                     │
	│ ssh     │ -p cilium-270721 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                   │ cilium-270721            │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │                     │
	│ ssh     │ -p cilium-270721 sudo cat /usr/lib/systemd/system/cri-docker.service                                             │ cilium-270721            │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │                     │
	│ ssh     │ -p cilium-270721 sudo cri-dockerd --version                                                                      │ cilium-270721            │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │                     │
	│ ssh     │ -p cilium-270721 sudo systemctl status containerd --all --full --no-pager                                        │ cilium-270721            │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │                     │
	│ ssh     │ -p cilium-270721 sudo systemctl cat containerd --no-pager                                                        │ cilium-270721            │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │                     │
	│ ssh     │ -p cilium-270721 sudo cat /lib/systemd/system/containerd.service                                                 │ cilium-270721            │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │                     │
	│ ssh     │ -p cilium-270721 sudo cat /etc/containerd/config.toml                                                            │ cilium-270721            │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │                     │
	│ ssh     │ -p cilium-270721 sudo containerd config dump                                                                     │ cilium-270721            │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │                     │
	│ ssh     │ -p cilium-270721 sudo systemctl status crio --all --full --no-pager                                              │ cilium-270721            │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │                     │
	│ ssh     │ -p cilium-270721 sudo systemctl cat crio --no-pager                                                              │ cilium-270721            │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │                     │
	│ ssh     │ -p cilium-270721 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                    │ cilium-270721            │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │                     │
	│ ssh     │ -p cilium-270721 sudo crio config                                                                                │ cilium-270721            │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │                     │
	│ delete  │ -p cilium-270721                                                                                                 │ cilium-270721            │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │ 13 Dec 25 11:32 UTC │
	│ start   │ -p force-systemd-env-835611 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd │ force-systemd-env-835611 │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 11:32:52
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 11:32:52.823736  551836 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:32:52.823852  551836 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:32:52.823863  551836 out.go:374] Setting ErrFile to fd 2...
	I1213 11:32:52.823869  551836 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:32:52.824130  551836 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 11:32:52.824530  551836 out.go:368] Setting JSON to false
	I1213 11:32:52.825388  551836 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":15326,"bootTime":1765610247,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 11:32:52.825457  551836 start.go:143] virtualization:  
	I1213 11:32:52.828936  551836 out.go:179] * [force-systemd-env-835611] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:32:52.831917  551836 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:32:52.832034  551836 notify.go:221] Checking for updates...
	I1213 11:32:52.837759  551836 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:32:52.840726  551836 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 11:32:52.843635  551836 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 11:32:52.846560  551836 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:32:52.849595  551836 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1213 11:32:52.853031  551836 config.go:182] Loaded profile config "kubernetes-upgrade-415704": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:32:52.853161  551836 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:32:52.884115  551836 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:32:52.884260  551836 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:32:52.958014  551836 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:32:52.947219708 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:32:52.958126  551836 docker.go:319] overlay module found
	I1213 11:32:52.961187  551836 out.go:179] * Using the docker driver based on user configuration
	I1213 11:32:52.964073  551836 start.go:309] selected driver: docker
	I1213 11:32:52.964104  551836 start.go:927] validating driver "docker" against <nil>
	I1213 11:32:52.964120  551836 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:32:52.964805  551836 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:32:53.026033  551836 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:32:53.015545388 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:32:53.026210  551836 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 11:32:53.026447  551836 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 11:32:53.029328  551836 out.go:179] * Using Docker driver with root privileges
	I1213 11:32:53.032230  551836 cni.go:84] Creating CNI manager for ""
	I1213 11:32:53.032312  551836 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 11:32:53.032328  551836 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 11:32:53.032417  551836 start.go:353] cluster config:
	{Name:force-systemd-env-835611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-env-835611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:32:53.035632  551836 out.go:179] * Starting "force-systemd-env-835611" primary control-plane node in "force-systemd-env-835611" cluster
	I1213 11:32:53.038563  551836 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 11:32:53.041631  551836 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:32:53.044518  551836 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 11:32:53.044592  551836 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1213 11:32:53.044605  551836 cache.go:65] Caching tarball of preloaded images
	I1213 11:32:53.044614  551836 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:32:53.044722  551836 preload.go:238] Found /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 11:32:53.044734  551836 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1213 11:32:53.044859  551836 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/force-systemd-env-835611/config.json ...
	I1213 11:32:53.044887  551836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/force-systemd-env-835611/config.json: {Name:mkafd7b8252b9a6a2de6691914560c558570b5c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:32:53.065147  551836 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:32:53.065170  551836 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:32:53.065187  551836 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:32:53.065220  551836 start.go:360] acquireMachinesLock for force-systemd-env-835611: {Name:mk8021f4a151e662e8ae164d3acf0db7115165a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:32:53.065359  551836 start.go:364] duration metric: took 117.499µs to acquireMachinesLock for "force-systemd-env-835611"
	I1213 11:32:53.065392  551836 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-835611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-env-835611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 11:32:53.065473  551836 start.go:125] createHost starting for "" (driver="docker")
	I1213 11:32:53.068773  551836 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 11:32:53.069054  551836 start.go:159] libmachine.API.Create for "force-systemd-env-835611" (driver="docker")
	I1213 11:32:53.069091  551836 client.go:173] LocalClient.Create starting
	I1213 11:32:53.069179  551836 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem
	I1213 11:32:53.069219  551836 main.go:143] libmachine: Decoding PEM data...
	I1213 11:32:53.069242  551836 main.go:143] libmachine: Parsing certificate...
	I1213 11:32:53.069303  551836 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem
	I1213 11:32:53.069329  551836 main.go:143] libmachine: Decoding PEM data...
	I1213 11:32:53.069345  551836 main.go:143] libmachine: Parsing certificate...
	I1213 11:32:53.069793  551836 cli_runner.go:164] Run: docker network inspect force-systemd-env-835611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 11:32:53.086522  551836 cli_runner.go:211] docker network inspect force-systemd-env-835611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 11:32:53.086640  551836 network_create.go:284] running [docker network inspect force-systemd-env-835611] to gather additional debugging logs...
	I1213 11:32:53.086662  551836 cli_runner.go:164] Run: docker network inspect force-systemd-env-835611
	W1213 11:32:53.105889  551836 cli_runner.go:211] docker network inspect force-systemd-env-835611 returned with exit code 1
	I1213 11:32:53.105925  551836 network_create.go:287] error running [docker network inspect force-systemd-env-835611]: docker network inspect force-systemd-env-835611: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-835611 not found
	I1213 11:32:53.105940  551836 network_create.go:289] output of [docker network inspect force-systemd-env-835611]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-835611 not found
	
	** /stderr **
	I1213 11:32:53.106052  551836 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:32:53.123302  551836 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-381e4ce3c9ab IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:2d:23:57:0e:cc} reservation:<nil>}
	I1213 11:32:53.123707  551836 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bd1082d121b0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:7a:42:ce:41:ea:ae} reservation:<nil>}
	I1213 11:32:53.124103  551836 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ebeb7162e340 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:cf:aa:41:ac:19} reservation:<nil>}
	I1213 11:32:53.124398  551836 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-d68979157457 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:a2:2a:01:e2:af:3d} reservation:<nil>}
	I1213 11:32:53.124860  551836 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a8c590}
	I1213 11:32:53.124883  551836 network_create.go:124] attempt to create docker network force-systemd-env-835611 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1213 11:32:53.124938  551836 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-835611 force-systemd-env-835611
	I1213 11:32:53.186864  551836 network_create.go:108] docker network force-systemd-env-835611 192.168.85.0/24 created
	I1213 11:32:53.186926  551836 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-835611" container
	I1213 11:32:53.187006  551836 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 11:32:53.204106  551836 cli_runner.go:164] Run: docker volume create force-systemd-env-835611 --label name.minikube.sigs.k8s.io=force-systemd-env-835611 --label created_by.minikube.sigs.k8s.io=true
	I1213 11:32:53.223296  551836 oci.go:103] Successfully created a docker volume force-systemd-env-835611
	I1213 11:32:53.223393  551836 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-835611-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-835611 --entrypoint /usr/bin/test -v force-systemd-env-835611:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 11:32:53.753818  551836 oci.go:107] Successfully prepared a docker volume force-systemd-env-835611
	I1213 11:32:53.753892  551836 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 11:32:53.753902  551836 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 11:32:53.753972  551836 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-835611:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 11:32:57.801096  551836 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-835611:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.047041479s)
	I1213 11:32:57.801136  551836 kic.go:203] duration metric: took 4.047230363s to extract preloaded images to volume ...
	W1213 11:32:57.801272  551836 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 11:32:57.801399  551836 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 11:32:57.952266  507417 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000862989s
	I1213 11:32:57.952302  507417 kubeadm.go:319] 
	I1213 11:32:57.952360  507417 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 11:32:57.952397  507417 kubeadm.go:319] 	- The kubelet is not running
	I1213 11:32:57.952507  507417 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 11:32:57.952517  507417 kubeadm.go:319] 
	I1213 11:32:57.952622  507417 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 11:32:57.952658  507417 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 11:32:57.952693  507417 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 11:32:57.952701  507417 kubeadm.go:319] 
	I1213 11:32:57.952937  507417 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 11:32:57.953347  507417 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 11:32:57.953457  507417 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 11:32:57.953706  507417 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 11:32:57.953716  507417 kubeadm.go:319] 
	I1213 11:32:57.953784  507417 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 11:32:57.953840  507417 kubeadm.go:403] duration metric: took 12m16.200368432s to StartCluster
	I1213 11:32:57.953876  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:32:57.953942  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:32:57.999170  507417 cri.go:89] found id: ""
	I1213 11:32:57.999197  507417 logs.go:282] 0 containers: []
	W1213 11:32:57.999207  507417 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:32:57.999214  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:32:57.999280  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:32:58.048694  507417 cri.go:89] found id: ""
	I1213 11:32:58.048721  507417 logs.go:282] 0 containers: []
	W1213 11:32:58.048730  507417 logs.go:284] No container was found matching "etcd"
	I1213 11:32:58.048737  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:32:58.048799  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:32:58.076669  507417 cri.go:89] found id: ""
	I1213 11:32:58.076696  507417 logs.go:282] 0 containers: []
	W1213 11:32:58.076705  507417 logs.go:284] No container was found matching "coredns"
	I1213 11:32:58.076711  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:32:58.076768  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:32:58.109796  507417 cri.go:89] found id: ""
	I1213 11:32:58.109819  507417 logs.go:282] 0 containers: []
	W1213 11:32:58.109828  507417 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:32:58.109835  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:32:58.109893  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:32:58.137392  507417 cri.go:89] found id: ""
	I1213 11:32:58.137414  507417 logs.go:282] 0 containers: []
	W1213 11:32:58.137423  507417 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:32:58.137429  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:32:58.137484  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:32:58.180466  507417 cri.go:89] found id: ""
	I1213 11:32:58.180489  507417 logs.go:282] 0 containers: []
	W1213 11:32:58.180499  507417 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:32:58.180505  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:32:58.180574  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:32:58.240938  507417 cri.go:89] found id: ""
	I1213 11:32:58.240968  507417 logs.go:282] 0 containers: []
	W1213 11:32:58.240978  507417 logs.go:284] No container was found matching "kindnet"
	I1213 11:32:58.240985  507417 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:32:58.241042  507417 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:32:58.334900  507417 cri.go:89] found id: ""
	I1213 11:32:58.334930  507417 logs.go:282] 0 containers: []
	W1213 11:32:58.334939  507417 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:32:58.334950  507417 logs.go:123] Gathering logs for kubelet ...
	I1213 11:32:58.334961  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:32:58.448923  507417 logs.go:123] Gathering logs for dmesg ...
	I1213 11:32:58.448961  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:32:58.477807  507417 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:32:58.477835  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:32:58.589265  507417 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:32:58.589295  507417 logs.go:123] Gathering logs for containerd ...
	I1213 11:32:58.589307  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:32:58.661645  507417 logs.go:123] Gathering logs for container status ...
	I1213 11:32:58.661692  507417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 11:32:58.728616  507417 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000862989s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 11:32:58.728658  507417 out.go:285] * 
	W1213 11:32:58.728711  507417 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000862989s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 11:32:58.728729  507417 out.go:285] * 
	W1213 11:32:58.731642  507417 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 11:32:58.743673  507417 out.go:203] 
	W1213 11:32:58.746746  507417 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000862989s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 11:32:58.746885  507417 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 11:32:58.746946  507417 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 11:32:58.751909  507417 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 11:24:52 kubernetes-upgrade-415704 containerd[554]: time="2025-12-13T11:24:52.762728464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:24:52 kubernetes-upgrade-415704 containerd[554]: time="2025-12-13T11:24:52.763575519Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" with image id \"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\", repo tag \"registry.k8s.io/kube-proxy:v1.35.0-beta.0\", repo digest \"registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a\", size \"22429671\" in 1.341533483s"
	Dec 13 11:24:52 kubernetes-upgrade-415704 containerd[554]: time="2025-12-13T11:24:52.763709272Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" returns image reference \"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\""
	Dec 13 11:24:52 kubernetes-upgrade-415704 containerd[554]: time="2025-12-13T11:24:52.764560111Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 13 11:24:54 kubernetes-upgrade-415704 containerd[554]: time="2025-12-13T11:24:54.208564956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:24:54 kubernetes-upgrade-415704 containerd[554]: time="2025-12-13T11:24:54.210456600Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=20453241"
	Dec 13 11:24:54 kubernetes-upgrade-415704 containerd[554]: time="2025-12-13T11:24:54.212815505Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:24:54 kubernetes-upgrade-415704 containerd[554]: time="2025-12-13T11:24:54.216897009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:24:54 kubernetes-upgrade-415704 containerd[554]: time="2025-12-13T11:24:54.218199552Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"21168808\" in 1.453509545s"
	Dec 13 11:24:54 kubernetes-upgrade-415704 containerd[554]: time="2025-12-13T11:24:54.218332829Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\""
	Dec 13 11:24:54 kubernetes-upgrade-415704 containerd[554]: time="2025-12-13T11:24:54.219678392Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
	Dec 13 11:24:54 kubernetes-upgrade-415704 containerd[554]: time="2025-12-13T11:24:54.857911462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 13 11:24:54 kubernetes-upgrade-415704 containerd[554]: time="2025-12-13T11:24:54.859766543Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709"
	Dec 13 11:24:54 kubernetes-upgrade-415704 containerd[554]: time="2025-12-13T11:24:54.862162790Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 13 11:24:54 kubernetes-upgrade-415704 containerd[554]: time="2025-12-13T11:24:54.865937264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 13 11:24:54 kubernetes-upgrade-415704 containerd[554]: time="2025-12-13T11:24:54.866617869Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 646.800218ms"
	Dec 13 11:24:54 kubernetes-upgrade-415704 containerd[554]: time="2025-12-13T11:24:54.867055903Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
	Dec 13 11:29:46 kubernetes-upgrade-415704 containerd[554]: time="2025-12-13T11:29:46.794617257Z" level=info msg="container event discarded" container=da690dec115153e4d55a9950f9d456a29e8e4394099b062cd5272ee7d3ddb60a type=CONTAINER_DELETED_EVENT
	Dec 13 11:29:46 kubernetes-upgrade-415704 containerd[554]: time="2025-12-13T11:29:46.809951656Z" level=info msg="container event discarded" container=31d108df5678292cd485e6067b9712142c9619c8d8fe0361049349e33c106223 type=CONTAINER_DELETED_EVENT
	Dec 13 11:29:46 kubernetes-upgrade-415704 containerd[554]: time="2025-12-13T11:29:46.820389117Z" level=info msg="container event discarded" container=3a876cdad6f200d12f3663cf299fa71628d4e74884dbc2e96e21f81136e6807d type=CONTAINER_DELETED_EVENT
	Dec 13 11:29:46 kubernetes-upgrade-415704 containerd[554]: time="2025-12-13T11:29:46.820449885Z" level=info msg="container event discarded" container=52265489c358495040a0fa0e62cd3c8a317625316f4b222dcb4fef27d1884056 type=CONTAINER_DELETED_EVENT
	Dec 13 11:29:46 kubernetes-upgrade-415704 containerd[554]: time="2025-12-13T11:29:46.836842698Z" level=info msg="container event discarded" container=d5bfd772e621cb5743e2e48f0ed945e0a67a48bfad8f78e2a24b0c328ea6bcee type=CONTAINER_DELETED_EVENT
	Dec 13 11:29:46 kubernetes-upgrade-415704 containerd[554]: time="2025-12-13T11:29:46.836910284Z" level=info msg="container event discarded" container=c7d1257af54b8b5c4e8348ec468ed5b9d10121980161940914c9db31dd33438b type=CONTAINER_DELETED_EVENT
	Dec 13 11:29:46 kubernetes-upgrade-415704 containerd[554]: time="2025-12-13T11:29:46.852177901Z" level=info msg="container event discarded" container=ce6cdab872bc4d1c3af82562454a72ef823152ef9c541e5a05793b44cfde17e9 type=CONTAINER_DELETED_EVENT
	Dec 13 11:29:46 kubernetes-upgrade-415704 containerd[554]: time="2025-12-13T11:29:46.852240606Z" level=info msg="container event discarded" container=0186d4cf7c92fe7db98173bc4f3da5463123875eee2a0db8defecfa1f2e546ac type=CONTAINER_DELETED_EVENT
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +12.907693] overlayfs: idmapped layers are currently not supported
	[Dec13 09:25] overlayfs: idmapped layers are currently not supported
	[ +26.192425] overlayfs: idmapped layers are currently not supported
	[Dec13 09:26] overlayfs: idmapped layers are currently not supported
	[ +25.729788] overlayfs: idmapped layers are currently not supported
	[Dec13 09:27] overlayfs: idmapped layers are currently not supported
	[Dec13 09:28] overlayfs: idmapped layers are currently not supported
	[Dec13 09:31] overlayfs: idmapped layers are currently not supported
	[Dec13 09:32] overlayfs: idmapped layers are currently not supported
	[ +17.979093] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec13 09:43] overlayfs: idmapped layers are currently not supported
	[Dec13 09:45] overlayfs: idmapped layers are currently not supported
	[ +25.885102] overlayfs: idmapped layers are currently not supported
	[Dec13 09:46] overlayfs: idmapped layers are currently not supported
	[ +22.078149] overlayfs: idmapped layers are currently not supported
	[Dec13 09:47] overlayfs: idmapped layers are currently not supported
	[Dec13 09:48] overlayfs: idmapped layers are currently not supported
	[Dec13 09:49] overlayfs: idmapped layers are currently not supported
	[Dec13 09:51] overlayfs: idmapped layers are currently not supported
	[ +17.043564] overlayfs: idmapped layers are currently not supported
	[Dec13 09:52] overlayfs: idmapped layers are currently not supported
	[Dec13 09:53] overlayfs: idmapped layers are currently not supported
	[Dec13 10:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:19] hrtimer: interrupt took 21247146 ns
	
	
	==> kernel <==
	 11:33:01 up  4:15,  0 user,  load average: 2.82, 1.78, 1.93
	Linux kubernetes-upgrade-415704 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 11:32:57 kubernetes-upgrade-415704 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:32:58 kubernetes-upgrade-415704 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 13 11:32:58 kubernetes-upgrade-415704 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:32:58 kubernetes-upgrade-415704 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:32:58 kubernetes-upgrade-415704 kubelet[14553]: E1213 11:32:58.308430   14553 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:32:58 kubernetes-upgrade-415704 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:32:58 kubernetes-upgrade-415704 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:32:59 kubernetes-upgrade-415704 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 13 11:32:59 kubernetes-upgrade-415704 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:32:59 kubernetes-upgrade-415704 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:32:59 kubernetes-upgrade-415704 kubelet[14598]: E1213 11:32:59.366318   14598 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:32:59 kubernetes-upgrade-415704 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:32:59 kubernetes-upgrade-415704 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:33:00 kubernetes-upgrade-415704 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 13 11:33:00 kubernetes-upgrade-415704 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:33:00 kubernetes-upgrade-415704 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:33:00 kubernetes-upgrade-415704 kubelet[14604]: E1213 11:33:00.354483   14604 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:33:00 kubernetes-upgrade-415704 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:33:00 kubernetes-upgrade-415704 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:33:01 kubernetes-upgrade-415704 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 13 11:33:01 kubernetes-upgrade-415704 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:33:01 kubernetes-upgrade-415704 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:33:01 kubernetes-upgrade-415704 kubelet[14624]: E1213 11:33:01.224623   14624 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:33:01 kubernetes-upgrade-415704 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:33:01 kubernetes-upgrade-415704 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
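The kubelet journal above pins down the failure: kubelet v1.35.0-beta.0 exits on startup because the host is still on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), so kubeadm's four-minute wait on http://127.0.0.1:10248/healthz can never succeed. A minimal triage sketch, assuming shell access to the node through minikube ssh (the profile name is taken from the log; these commands are illustrative and were not part of the test run):

	# "cgroup2fs" means the unified v2 hierarchy; "tmpfs" means the legacy
	# v1 hierarchy that kubelet v1.35+ rejects by default.
	minikube -p kubernetes-upgrade-415704 ssh -- stat -fc %T /sys/fs/cgroup
	# Inspect the crash-looping unit, as the kubeadm output itself suggests.
	minikube -p kubernetes-upgrade-415704 ssh -- sudo systemctl status kubelet --no-pager
	minikube -p kubernetes-upgrade-415704 ssh -- sudo journalctl -xeu kubelet -n 50

Per the preflight warning quoted above, keeping kubelet v1.35+ on a cgroup v1 host additionally requires the kubelet configuration option 'FailCgroupV1' set to 'false'; on this Ubuntu 20.04 / 5.15.0-1084-aws runner, moving to a cgroup v2 host image is arguably the more durable fix.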
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-415704 -n kubernetes-upgrade-415704
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-415704 -n kubernetes-upgrade-415704: exit status 2 (453.937641ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "kubernetes-upgrade-415704" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "kubernetes-upgrade-415704" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-415704
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-415704: (2.48188772s)
--- FAIL: TestKubernetesUpgrade (800.64s)
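
Minikube's own suggestion in the log ("try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start") gives the shape of a retry. A hedged sketch, recreating the just-deleted profile with the driver, runtime, and Kubernetes version this job uses (flags other than the suggested one are inferred from the report, not prescribed by it):

	out/minikube-linux-arm64 start -p kubernetes-upgrade-415704 \
	  --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

Note that on a cgroup v1 kernel the cgroup-driver flag alone may not clear the validation error shown in the kubelet journal, which v1.35+ gates behind the separate 'FailCgroupV1' option.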

TestStartStop/group/no-preload/serial/FirstStart (514.7s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-333352 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-333352 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m33.105951171s)

-- stdout --
	* [no-preload-333352] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "no-preload-333352" primary control-plane node in "no-preload-333352" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	
	

-- /stdout --
** stderr ** 
	I1213 11:36:43.271156  568526 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:36:43.271272  568526 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:36:43.271282  568526 out.go:374] Setting ErrFile to fd 2...
	I1213 11:36:43.271287  568526 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:36:43.271548  568526 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 11:36:43.271948  568526 out.go:368] Setting JSON to false
	I1213 11:36:43.272847  568526 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":15556,"bootTime":1765610247,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 11:36:43.272915  568526 start.go:143] virtualization:  
	I1213 11:36:43.277134  568526 out.go:179] * [no-preload-333352] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:36:43.280533  568526 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:36:43.280620  568526 notify.go:221] Checking for updates...
	I1213 11:36:43.287150  568526 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:36:43.290436  568526 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 11:36:43.293596  568526 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 11:36:43.296649  568526 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:36:43.299585  568526 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:36:43.303329  568526 config.go:182] Loaded profile config "cert-expiration-086397": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 11:36:43.303450  568526 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:36:43.331713  568526 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:36:43.331846  568526 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:36:43.398845  568526 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:36:43.386929703 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:36:43.398967  568526 docker.go:319] overlay module found
	I1213 11:36:43.404300  568526 out.go:179] * Using the docker driver based on user configuration
	I1213 11:36:43.407220  568526 start.go:309] selected driver: docker
	I1213 11:36:43.407243  568526 start.go:927] validating driver "docker" against <nil>
	I1213 11:36:43.407258  568526 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:36:43.407995  568526 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:36:43.465902  568526 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:36:43.456313409 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
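The two info.go:266 dumps above are the decoded output of the 'docker system info --format "{{json .}}"' command that cli_runner.go:164 logs alongside them. A minimal, self-contained Go sketch of that pattern follows; the trimmed-down struct is hypothetical and covers only a few of the fields visible in the dump.

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // dockerInfo is a hypothetical subset of the fields visible in the
    // info.go:266 dumps; docker emits many more.
    type dockerInfo struct {
        NCPU         int    `json:"NCPU"`
        MemTotal     int64  `json:"MemTotal"`
        CgroupDriver string `json:"CgroupDriver"`
        OSType       string `json:"OSType"`
        Architecture string `json:"Architecture"`
    }

    func main() {
        // Same command shown at cli_runner.go:164 above.
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            panic(err)
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            panic(err)
        }
        // For this run that would print: 2 CPUs, 8214835200 bytes RAM, cgroupfs (linux/aarch64)
        fmt.Printf("%d CPUs, %d bytes RAM, %s (%s/%s)\n",
            info.NCPU, info.MemTotal, info.CgroupDriver, info.OSType, info.Architecture)
    }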
	I1213 11:36:43.466055  568526 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 11:36:43.466297  568526 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 11:36:43.469315  568526 out.go:179] * Using Docker driver with root privileges
	I1213 11:36:43.472180  568526 cni.go:84] Creating CNI manager for ""
	I1213 11:36:43.472249  568526 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 11:36:43.472264  568526 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 11:36:43.472346  568526 start.go:353] cluster config:
	{Name:no-preload-333352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-333352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:36:43.475661  568526 out.go:179] * Starting "no-preload-333352" primary control-plane node in "no-preload-333352" cluster
	I1213 11:36:43.478498  568526 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 11:36:43.481423  568526 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:36:43.484298  568526 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 11:36:43.484391  568526 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:36:43.484424  568526 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/config.json ...
	I1213 11:36:43.484463  568526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/config.json: {Name:mk3991a34f0bb6dfbecafeec9420c712f053dd5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:36:43.484707  568526 cache.go:107] acquiring lock: {Name:mk31a59cdc41332147a99da115e762325d4c0338 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:36:43.484759  568526 cache.go:115] /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1213 11:36:43.484776  568526 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 78.311µs
	I1213 11:36:43.484790  568526 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1213 11:36:43.484805  568526 cache.go:107] acquiring lock: {Name:mk35ccdf3fe56b66e694c71ff2d919f143d8dacc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:36:43.484872  568526 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:36:43.485184  568526 cache.go:107] acquiring lock: {Name:mk26d49691f1ca365a0728b2ae008656f80369ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:36:43.485300  568526 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:36:43.485559  568526 cache.go:107] acquiring lock: {Name:mkc6bf22ce18468a92a774694a4b49cbc277f1ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:36:43.485658  568526 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:36:43.485868  568526 cache.go:107] acquiring lock: {Name:mk2ae32cc20ed4877d34af62f362936effddd88e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:36:43.485976  568526 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:36:43.486170  568526 cache.go:107] acquiring lock: {Name:mkc81502ef492ecd96689a43cd1ba75bb4269f1e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:36:43.486248  568526 cache.go:115] /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1213 11:36:43.486262  568526 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 96.214µs
	I1213 11:36:43.486269  568526 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1213 11:36:43.486281  568526 cache.go:107] acquiring lock: {Name:mk8c5f5248a840d1f1002cf2ef82275f7d10aa22 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:36:43.486315  568526 cache.go:115] /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1213 11:36:43.486325  568526 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 45.219µs
	I1213 11:36:43.486331  568526 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1213 11:36:43.486341  568526 cache.go:107] acquiring lock: {Name:mk23fe723c287cca56429f89071149f1d96bb4dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:36:43.486418  568526 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:36:43.488378  568526 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:36:43.488967  568526 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:36:43.489552  568526 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:36:43.490137  568526 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:36:43.491513  568526 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:36:43.515610  568526 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:36:43.515631  568526 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:36:43.515646  568526 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:36:43.515680  568526 start.go:360] acquireMachinesLock for no-preload-333352: {Name:mkcf6f110441e125d79b38a8f8cc1a9606a821b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:36:43.515781  568526 start.go:364] duration metric: took 85.704µs to acquireMachinesLock for "no-preload-333352"
	I1213 11:36:43.515805  568526 start.go:93] Provisioning new machine with config: &{Name:no-preload-333352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-333352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 11:36:43.515869  568526 start.go:125] createHost starting for "" (driver="docker")
	I1213 11:36:43.519980  568526 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 11:36:43.526918  568526 start.go:159] libmachine.API.Create for "no-preload-333352" (driver="docker")
	I1213 11:36:43.526975  568526 client.go:173] LocalClient.Create starting
	I1213 11:36:43.527038  568526 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem
	I1213 11:36:43.527075  568526 main.go:143] libmachine: Decoding PEM data...
	I1213 11:36:43.527094  568526 main.go:143] libmachine: Parsing certificate...
	I1213 11:36:43.527146  568526 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem
	I1213 11:36:43.527177  568526 main.go:143] libmachine: Decoding PEM data...
	I1213 11:36:43.527189  568526 main.go:143] libmachine: Parsing certificate...
	I1213 11:36:43.527555  568526 cli_runner.go:164] Run: docker network inspect no-preload-333352 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 11:36:43.546230  568526 cli_runner.go:211] docker network inspect no-preload-333352 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 11:36:43.546313  568526 network_create.go:284] running [docker network inspect no-preload-333352] to gather additional debugging logs...
	I1213 11:36:43.546336  568526 cli_runner.go:164] Run: docker network inspect no-preload-333352
	W1213 11:36:43.563112  568526 cli_runner.go:211] docker network inspect no-preload-333352 returned with exit code 1
	I1213 11:36:43.563149  568526 network_create.go:287] error running [docker network inspect no-preload-333352]: docker network inspect no-preload-333352: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-333352 not found
	I1213 11:36:43.563163  568526 network_create.go:289] output of [docker network inspect no-preload-333352]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-333352 not found
	
	** /stderr **
	I1213 11:36:43.563262  568526 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:36:43.580941  568526 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-381e4ce3c9ab IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:2d:23:57:0e:cc} reservation:<nil>}
	I1213 11:36:43.581316  568526 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bd1082d121b0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:7a:42:ce:41:ea:ae} reservation:<nil>}
	I1213 11:36:43.581760  568526 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ebeb7162e340 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:cf:aa:41:ac:19} reservation:<nil>}
	I1213 11:36:43.582058  568526 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c2e564c69fa8 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4a:61:c9:72:a3:16} reservation:<nil>}
	I1213 11:36:43.582499  568526 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c603e0}
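The network.go:211/206 lines above show the subnet probe: candidates start at 192.168.49.0/24 and the third octet advances by 9 (49, 58, 67 and 76 are taken by existing bridges; 85 is free). A simplified Go sketch of that selection, assuming a precomputed set of taken subnets in place of real docker-network inspection; the start and step are read off this log rather than taken from minikube's source:

    package main

    import "fmt"

    // firstFreeSubnet mimics the probe order visible in the log: start at
    // 192.168.49.0/24 and advance the third octet by 9 until a candidate is
    // not already taken. Start and step are assumptions read off this log.
    func firstFreeSubnet(taken map[string]bool) string {
        for octet := 49; octet < 255; octet += 9 {
            subnet := fmt.Sprintf("192.168.%d.0/24", octet)
            if !taken[subnet] {
                return subnet
            }
        }
        return ""
    }

    func main() {
        taken := map[string]bool{ // bridges already present per the log
            "192.168.49.0/24": true,
            "192.168.58.0/24": true,
            "192.168.67.0/24": true,
            "192.168.76.0/24": true,
        }
        fmt.Println(firstFreeSubnet(taken)) // prints 192.168.85.0/24, matching the log
    }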
	I1213 11:36:43.582523  568526 network_create.go:124] attempt to create docker network no-preload-333352 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1213 11:36:43.582583  568526 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-333352 no-preload-333352
	I1213 11:36:43.651890  568526 network_create.go:108] docker network no-preload-333352 192.168.85.0/24 created
	I1213 11:36:43.651921  568526 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-333352" container
	I1213 11:36:43.652011  568526 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 11:36:43.669691  568526 cli_runner.go:164] Run: docker volume create no-preload-333352 --label name.minikube.sigs.k8s.io=no-preload-333352 --label created_by.minikube.sigs.k8s.io=true
	I1213 11:36:43.687787  568526 oci.go:103] Successfully created a docker volume no-preload-333352
	I1213 11:36:43.687874  568526 cli_runner.go:164] Run: docker run --rm --name no-preload-333352-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-333352 --entrypoint /usr/bin/test -v no-preload-333352:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 11:36:43.818459  568526 cache.go:162] opening:  /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1213 11:36:43.829622  568526 cache.go:162] opening:  /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1213 11:36:43.858381  568526 cache.go:162] opening:  /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1213 11:36:43.859963  568526 cache.go:162] opening:  /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1213 11:36:43.898924  568526 cache.go:162] opening:  /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1213 11:36:44.180098  568526 cache.go:157] /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1213 11:36:44.180179  568526 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 694.309193ms
	I1213 11:36:44.180226  568526 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1213 11:36:44.386227  568526 oci.go:107] Successfully prepared a docker volume no-preload-333352
	I1213 11:36:44.386266  568526 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	W1213 11:36:44.386390  568526 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 11:36:44.386489  568526 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 11:36:44.489804  568526 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-333352 --name no-preload-333352 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-333352 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-333352 --network no-preload-333352 --ip 192.168.85.2 --volume no-preload-333352:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 11:36:44.860581  568526 cache.go:157] /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1213 11:36:44.860612  568526 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 1.375053618s
	I1213 11:36:44.860737  568526 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1213 11:36:44.915861  568526 cache.go:157] /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1213 11:36:44.915884  568526 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 1.430703117s
	I1213 11:36:44.915895  568526 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1213 11:36:44.941319  568526 cache.go:157] /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1213 11:36:44.941356  568526 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 1.456549697s
	I1213 11:36:44.941370  568526 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1213 11:36:44.971378  568526 cache.go:157] /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1213 11:36:44.971408  568526 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.485066006s
	I1213 11:36:44.971421  568526 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1213 11:36:44.971439  568526 cache.go:87] Successfully saved all images to host disk.
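The cache.go lines interleaved through this span all follow one per-image flow: acquire a named lock, return immediately when the cached tar already exists (storage-provisioner, pause and etcd above), otherwise download and save it (the five v1.35.0-beta.0 images). A compressed, self-contained Go sketch of that fan-out; in-memory mutexes and a stub fetch stand in for minikube's file locks and registry pulls:

    package main

    import (
        "fmt"
        "os"
        "sync"
    )

    // cacheImage sketches the per-image flow from cache.go: take the image's
    // lock, skip the download when the tar is already on disk, otherwise
    // fetch and save it. fetch is a stub standing in for a registry pull.
    func cacheImage(path string, fetch func() []byte, mu *sync.Mutex) {
        mu.Lock() // stand-in for the named file locks in the log
        defer mu.Unlock()
        if _, err := os.Stat(path); err == nil {
            fmt.Println(path, "exists, skipping") // e.g. pause_3.10.1 above
            return
        }
        if err := os.WriteFile(path, fetch(), 0o644); err != nil {
            panic(err)
        }
        fmt.Println(path, "saved")
    }

    func main() {
        images := []string{"pause_3.10.1", "etcd_3.6.5-0", "kube-apiserver_v1.35.0-beta.0"}
        locks := make([]sync.Mutex, len(images))
        var wg sync.WaitGroup
        for i, img := range images {
            wg.Add(1)
            go func(i int, img string) { // one goroutine per image, as in the log
                defer wg.Done()
                cacheImage(os.TempDir()+"/"+img, func() []byte { return []byte("tar") }, &locks[i])
            }(i, img)
        }
        wg.Wait()
    }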
	I1213 11:36:44.998787  568526 cli_runner.go:164] Run: docker container inspect no-preload-333352 --format={{.State.Running}}
	I1213 11:36:45.036466  568526 cli_runner.go:164] Run: docker container inspect no-preload-333352 --format={{.State.Status}}
	I1213 11:36:45.116623  568526 cli_runner.go:164] Run: docker exec no-preload-333352 stat /var/lib/dpkg/alternatives/iptables
	I1213 11:36:45.259261  568526 oci.go:144] the created container "no-preload-333352" has a running status.
	I1213 11:36:45.259301  568526 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/no-preload-333352/id_rsa...
	I1213 11:36:45.895344  568526 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22127-307042/.minikube/machines/no-preload-333352/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 11:36:45.931819  568526 cli_runner.go:164] Run: docker container inspect no-preload-333352 --format={{.State.Status}}
	I1213 11:36:45.988654  568526 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 11:36:45.988675  568526 kic_runner.go:114] Args: [docker exec --privileged no-preload-333352 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 11:36:46.071571  568526 cli_runner.go:164] Run: docker container inspect no-preload-333352 --format={{.State.Status}}
	I1213 11:36:46.129709  568526 machine.go:94] provisionDockerMachine start ...
	I1213 11:36:46.129813  568526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:36:46.205156  568526 main.go:143] libmachine: Using SSH client type: native
	I1213 11:36:46.205524  568526 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33405 <nil> <nil>}
	I1213 11:36:46.205535  568526 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:36:46.206946  568526 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 11:36:49.384199  568526 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-333352
	
	I1213 11:36:49.384221  568526 ubuntu.go:182] provisioning hostname "no-preload-333352"
	I1213 11:36:49.384297  568526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:36:49.418310  568526 main.go:143] libmachine: Using SSH client type: native
	I1213 11:36:49.418621  568526 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33405 <nil> <nil>}
	I1213 11:36:49.418638  568526 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-333352 && echo "no-preload-333352" | sudo tee /etc/hostname
	I1213 11:36:49.595402  568526 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-333352
	
	I1213 11:36:49.595548  568526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:36:49.629888  568526 main.go:143] libmachine: Using SSH client type: native
	I1213 11:36:49.630192  568526 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33405 <nil> <nil>}
	I1213 11:36:49.630208  568526 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-333352' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-333352/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-333352' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:36:49.795671  568526 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:36:49.795701  568526 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-307042/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-307042/.minikube}
	I1213 11:36:49.795728  568526 ubuntu.go:190] setting up certificates
	I1213 11:36:49.795746  568526 provision.go:84] configureAuth start
	I1213 11:36:49.795816  568526 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-333352
	I1213 11:36:49.820580  568526 provision.go:143] copyHostCerts
	I1213 11:36:49.820650  568526 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem, removing ...
	I1213 11:36:49.820664  568526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem
	I1213 11:36:49.820740  568526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem (1123 bytes)
	I1213 11:36:49.820836  568526 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem, removing ...
	I1213 11:36:49.820847  568526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem
	I1213 11:36:49.820874  568526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem (1675 bytes)
	I1213 11:36:49.820931  568526 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem, removing ...
	I1213 11:36:49.820941  568526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem
	I1213 11:36:49.820964  568526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem (1082 bytes)
	I1213 11:36:49.821041  568526 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem org=jenkins.no-preload-333352 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-333352]
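provision.go:117 above issues a server certificate signed by the minikube CA, with SANs [127.0.0.1 192.168.85.2 localhost minikube no-preload-333352] and the 26280h lifetime seen in the config dump. A self-contained sketch of that kind of issuance with Go's crypto/x509; the throwaway generated CA stands in for ca.pem/ca-key.pem, and the template fields are illustrative rather than minikube's exact ones:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Throwaway CA: a stand-in for the ca.pem/ca-key.pem pair read above.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert carrying the SANs from the provision.go:117 line.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-333352"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
            DNSNames:     []string{"localhost", "minikube", "no-preload-333352"},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // The CA's private key signs the server template; pub is the server's key.
        der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("issued server cert: %d DER bytes\n", len(der))
    }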
	I1213 11:36:50.097748  568526 provision.go:177] copyRemoteCerts
	I1213 11:36:50.097870  568526 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:36:50.097955  568526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:36:50.118616  568526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/no-preload-333352/id_rsa Username:docker}
	I1213 11:36:50.229817  568526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:36:50.249882  568526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 11:36:50.272739  568526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 11:36:50.295685  568526 provision.go:87] duration metric: took 499.908801ms to configureAuth
	I1213 11:36:50.295716  568526 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:36:50.295897  568526 config.go:182] Loaded profile config "no-preload-333352": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:36:50.295913  568526 machine.go:97] duration metric: took 4.166175275s to provisionDockerMachine
	I1213 11:36:50.295920  568526 client.go:176] duration metric: took 6.768939299s to LocalClient.Create
	I1213 11:36:50.295935  568526 start.go:167] duration metric: took 6.769022056s to libmachine.API.Create "no-preload-333352"
	I1213 11:36:50.295946  568526 start.go:293] postStartSetup for "no-preload-333352" (driver="docker")
	I1213 11:36:50.295956  568526 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:36:50.296006  568526 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:36:50.296048  568526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:36:50.315969  568526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/no-preload-333352/id_rsa Username:docker}
	I1213 11:36:50.444747  568526 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:36:50.457552  568526 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:36:50.457580  568526 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:36:50.457592  568526 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/addons for local assets ...
	I1213 11:36:50.457658  568526 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/files for local assets ...
	I1213 11:36:50.457752  568526 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> 3089152.pem in /etc/ssl/certs
	I1213 11:36:50.457853  568526 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:36:50.470410  568526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 11:36:50.516210  568526 start.go:296] duration metric: took 220.250118ms for postStartSetup
	I1213 11:36:50.516592  568526 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-333352
	I1213 11:36:50.558057  568526 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/config.json ...
	I1213 11:36:50.558336  568526 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:36:50.558379  568526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:36:50.587255  568526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/no-preload-333352/id_rsa Username:docker}
	I1213 11:36:50.699833  568526 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:36:50.705873  568526 start.go:128] duration metric: took 7.189990927s to createHost
	I1213 11:36:50.705907  568526 start.go:83] releasing machines lock for "no-preload-333352", held for 7.190117731s
	I1213 11:36:50.705978  568526 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-333352
	I1213 11:36:50.735704  568526 ssh_runner.go:195] Run: cat /version.json
	I1213 11:36:50.735754  568526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:36:50.735980  568526 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:36:50.736034  568526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:36:50.769043  568526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/no-preload-333352/id_rsa Username:docker}
	I1213 11:36:50.780279  568526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/no-preload-333352/id_rsa Username:docker}
	I1213 11:36:51.018474  568526 ssh_runner.go:195] Run: systemctl --version
	I1213 11:36:51.027814  568526 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:36:51.034132  568526 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:36:51.034211  568526 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:36:51.073120  568526 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 11:36:51.073147  568526 start.go:496] detecting cgroup driver to use...
	I1213 11:36:51.073179  568526 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 11:36:51.073237  568526 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 11:36:51.091587  568526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:36:51.107888  568526 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:36:51.107960  568526 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:36:51.128427  568526 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:36:51.153138  568526 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:36:51.344754  568526 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:36:51.526941  568526 docker.go:234] disabling docker service ...
	I1213 11:36:51.527008  568526 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:36:51.563229  568526 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:36:51.588919  568526 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:36:51.784421  568526 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:36:51.991458  568526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:36:52.009049  568526 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:36:52.031137  568526 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 11:36:52.041450  568526 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 11:36:52.052163  568526 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 11:36:52.052232  568526 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 11:36:52.062605  568526 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:36:52.073177  568526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 11:36:52.085636  568526 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:36:52.096712  568526 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:36:52.106384  568526 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 11:36:52.116784  568526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 11:36:52.127811  568526 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 11:36:52.138776  568526 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:36:52.148851  568526 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:36:52.158524  568526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:36:52.331173  568526 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 11:36:52.472969  568526 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 11:36:52.473049  568526 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 11:36:52.479535  568526 start.go:564] Will wait 60s for crictl version
	I1213 11:36:52.479599  568526 ssh_runner.go:195] Run: which crictl
	I1213 11:36:52.483856  568526 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:36:52.555951  568526 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 11:36:52.556020  568526 ssh_runner.go:195] Run: containerd --version
	I1213 11:36:52.586781  568526 ssh_runner.go:195] Run: containerd --version
	I1213 11:36:52.617946  568526 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 11:36:52.620858  568526 cli_runner.go:164] Run: docker network inspect no-preload-333352 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:36:52.642618  568526 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 11:36:52.646888  568526 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:36:52.657803  568526 kubeadm.go:884] updating cluster {Name:no-preload-333352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-333352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:36:52.657920  568526 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 11:36:52.657979  568526 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:36:52.688368  568526 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1213 11:36:52.688402  568526 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1213 11:36:52.688459  568526 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:36:52.688675  568526 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:36:52.688789  568526 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:36:52.688879  568526 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:36:52.688961  568526 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:36:52.689049  568526 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1213 11:36:52.689129  568526 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1213 11:36:52.689215  568526 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:36:52.691456  568526 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1213 11:36:52.691766  568526 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:36:52.691790  568526 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1213 11:36:52.691848  568526 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:36:52.691913  568526 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:36:52.692011  568526 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:36:52.692053  568526 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:36:52.692271  568526 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:36:52.940047  568526 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1213 11:36:52.940122  568526 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1213 11:36:52.956897  568526 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1213 11:36:52.957027  568526 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1213 11:36:52.958569  568526 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1213 11:36:52.958674  568526 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:36:53.013528  568526 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1213 11:36:53.013643  568526 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:36:53.019936  568526 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1213 11:36:53.019983  568526 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1213 11:36:53.020053  568526 ssh_runner.go:195] Run: which crictl
	I1213 11:36:53.026843  568526 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1213 11:36:53.026910  568526 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1213 11:36:53.026962  568526 ssh_runner.go:195] Run: which crictl
	I1213 11:36:53.026993  568526 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1213 11:36:53.027040  568526 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:36:53.027448  568526 ssh_runner.go:195] Run: which crictl
	I1213 11:36:53.027370  568526 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1213 11:36:53.027598  568526 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:36:53.043410  568526 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1213 11:36:53.043478  568526 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:36:53.059253  568526 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1213 11:36:53.059325  568526 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:36:53.061831  568526 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1213 11:36:53.061891  568526 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:36:53.061952  568526 ssh_runner.go:195] Run: which crictl
	I1213 11:36:53.062081  568526 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1213 11:36:53.064303  568526 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1213 11:36:53.064345  568526 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:36:53.064412  568526 ssh_runner.go:195] Run: which crictl
	I1213 11:36:53.064505  568526 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1213 11:36:53.064579  568526 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:36:53.081277  568526 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1213 11:36:53.081325  568526 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:36:53.081397  568526 ssh_runner.go:195] Run: which crictl
	I1213 11:36:53.122261  568526 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1213 11:36:53.122309  568526 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:36:53.122383  568526 ssh_runner.go:195] Run: which crictl
	I1213 11:36:53.135403  568526 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1213 11:36:53.135415  568526 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:36:53.135557  568526 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:36:53.135565  568526 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:36:53.135644  568526 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1213 11:36:53.135719  568526 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:36:53.135734  568526 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:36:53.246779  568526 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:36:53.246923  568526 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1213 11:36:53.247014  568526 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:36:53.247070  568526 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:36:53.247078  568526 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:36:53.247141  568526 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1213 11:36:53.247208  568526 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:36:53.340920  568526 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1213 11:36:53.341038  568526 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1213 11:36:53.347942  568526 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:36:53.348158  568526 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:36:53.348223  568526 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1213 11:36:53.348302  568526 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1213 11:36:53.348347  568526 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:36:53.348393  568526 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:36:53.348465  568526 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1213 11:36:53.348516  568526 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1213 11:36:53.348696  568526 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1213 11:36:53.348719  568526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
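The stat/scp pairs above are a check-then-copy protocol: `stat -c "%s %y"` exits non-zero when the tarball is missing on the node, and only then is the cached copy pushed over SSH. A minimal sketch of that control flow; `runRemote` and `copyToRemote` are hypothetical stand-ins for minikube's ssh_runner, not its real API.

package main

import "fmt"

// ensureRemoteFile mirrors the stat-then-scp pattern in the log:
// probe first, transfer only on a miss.
func ensureRemoteFile(local, remote string,
	runRemote func(cmd string) error,
	copyToRemote func(src, dst string) error) error {
	// stat exits with status 1 when the path does not exist.
	if err := runRemote(fmt.Sprintf(`stat -c "%%s %%y" %s`, remote)); err == nil {
		return nil // already on the node, nothing to transfer
	}
	return copyToRemote(local, remote)
}

func main() {
	// Toy runners: pretend the remote file is always missing.
	err := ensureRemoteFile("/cache/pause_3.10.1", "/var/lib/minikube/images/pause_3.10.1",
		func(cmd string) error { return fmt.Errorf("exit status 1") },
		func(src, dst string) error { fmt.Println("scp", src, "-->", dst); return nil },
	)
	if err != nil {
		fmt.Println("transfer failed:", err)
	}
}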
	I1213 11:36:53.451050  568526 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1213 11:36:53.451460  568526 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1213 11:36:53.451222  568526 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1213 11:36:53.451649  568526 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1213 11:36:53.451254  568526 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1213 11:36:53.451766  568526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1213 11:36:53.451288  568526 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1213 11:36:53.451896  568526 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1213 11:36:53.451312  568526 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1213 11:36:53.451965  568526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1213 11:36:53.451343  568526 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1213 11:36:53.452098  568526 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1213 11:36:53.502865  568526 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1213 11:36:53.502960  568526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1213 11:36:53.503071  568526 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1213 11:36:53.503135  568526 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1213 11:36:53.503167  568526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1213 11:36:53.503234  568526 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1213 11:36:53.503245  568526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1213 11:36:53.503332  568526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	W1213 11:36:53.517702  568526 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1213 11:36:53.517811  568526 retry.go:31] will retry after 240.561611ms: ssh: rejected: connect failed (open failed)
	I1213 11:36:53.522873  568526 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1213 11:36:53.523000  568526 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1213 11:36:53.523191  568526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:36:53.588360  568526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/no-preload-333352/id_rsa Username:docker}
	W1213 11:36:54.027484  568526 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1213 11:36:54.027624  568526 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1213 11:36:54.027679  568526 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:36:54.075392  568526 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1213 11:36:54.196643  568526 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1213 11:36:54.196758  568526 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:36:54.196836  568526 ssh_runner.go:195] Run: which crictl
	I1213 11:36:54.235464  568526 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1213 11:36:54.235587  568526 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1213 11:36:54.240162  568526 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:36:56.088843  568526 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.853208622s)
	I1213 11:36:56.088873  568526 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1213 11:36:56.088891  568526 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1213 11:36:56.088950  568526 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1213 11:36:56.089014  568526 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.848792254s)
	I1213 11:36:56.089049  568526 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:36:56.134732  568526 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:36:57.522988  568526 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.434002384s)
	I1213 11:36:57.523012  568526 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1213 11:36:57.523031  568526 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1213 11:36:57.523080  568526 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1213 11:36:57.523140  568526 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.388388901s)
	I1213 11:36:57.523163  568526 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1213 11:36:57.523231  568526 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1213 11:36:58.980349  568526 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.457096744s)
	I1213 11:36:58.980378  568526 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1213 11:36:58.980404  568526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1213 11:36:58.980478  568526 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.457382934s)
	I1213 11:36:58.980487  568526 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1213 11:36:58.980503  568526 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1213 11:36:58.980546  568526 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1213 11:37:02.224995  568526 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (3.244428273s)
	I1213 11:37:02.225020  568526 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1213 11:37:02.225038  568526 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1213 11:37:02.225083  568526 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1213 11:37:03.225151  568526 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.000046112s)
	I1213 11:37:03.225176  568526 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1213 11:37:03.225193  568526 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1213 11:37:03.225244  568526 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1213 11:37:04.640951  568526 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.415685467s)
	I1213 11:37:04.640981  568526 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1213 11:37:04.640998  568526 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1213 11:37:04.641107  568526 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1213 11:37:05.014574  568526 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1213 11:37:05.014617  568526 cache_images.go:125] Successfully loaded all cached images
	I1213 11:37:05.014624  568526 cache_images.go:94] duration metric: took 12.326208256s to LoadCachedImages
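Each transferred tarball is then loaded into containerd's k8s.io image namespace with `sudo ctr -n=k8s.io images import`, one image at a time (the `containerd.go:285` "Loading image" lines above). A rough, illustrative version of that loop; the paths are examples and a working `sudo ctr` is assumed on the host.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	tarballs := []string{ // order mirrors the log; values illustrative
		"/var/lib/minikube/images/pause_3.10.1",
		"/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0",
	}
	for _, tar := range tarballs {
		// One import at a time, matching the serialized "Loading image:" lines.
		cmd := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tar)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("import %s failed: %v\n%s", tar, err, out)
			return
		}
		fmt.Println("loaded", tar)
	}
}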
	I1213 11:37:05.014637  568526 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 11:37:05.014748  568526 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-333352 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-333352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:37:05.014825  568526 ssh_runner.go:195] Run: sudo crictl info
	I1213 11:37:05.040412  568526 cni.go:84] Creating CNI manager for ""
	I1213 11:37:05.040438  568526 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 11:37:05.040456  568526 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 11:37:05.040483  568526 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-333352 NodeName:no-preload-333352 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:37:05.040610  568526 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-333352"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
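The generated kubeadm.yaml above is a single multi-document stream: InitConfiguration and ClusterConfiguration for kubeadm itself, then a KubeletConfiguration and a KubeProxyConfiguration, each introduced by `---` and dispatched on its `kind`. A toy, stdlib-only splitter showing how such a stream is carved up (real consumers use a YAML decoder):

package main

import (
	"fmt"
	"strings"
)

// kinds reports the `kind:` of each document in a multi-document
// YAML stream separated by "---" lines.
func kinds(stream string) []string {
	var out []string
	for _, doc := range strings.Split(stream, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				out = append(out, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
	return out
}

func main() {
	stream := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration"
	fmt.Println(kinds(stream)) // [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
}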
	
	I1213 11:37:05.040685  568526 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 11:37:05.048508  568526 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1213 11:37:05.048575  568526 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 11:37:05.056257  568526 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1213 11:37:05.056349  568526 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1213 11:37:05.057081  568526 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/22127-307042/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet
	I1213 11:37:05.057086  568526 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22127-307042/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm
	I1213 11:37:05.061048  568526 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1213 11:37:05.061085  568526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1213 11:37:05.922161  568526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:37:05.938450  568526 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1213 11:37:05.942769  568526 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1213 11:37:05.942809  568526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
	I1213 11:37:05.959296  568526 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1213 11:37:05.963273  568526 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1213 11:37:05.963314  568526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
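The `checksum=file:` query on the dl.k8s.io URLs asks the downloader to verify each binary against its published .sha256 file before the scp into /var/lib/minikube/binaries/v1.35.0-beta.0/. A simplified download-then-verify sketch, not minikube's actual downloader:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetchChecked downloads url and verifies it against the hex digest
// published at url+".sha256".
func fetchChecked(url string) ([]byte, error) {
	body, err := get(url)
	if err != nil {
		return nil, err
	}
	sum, err := get(url + ".sha256")
	if err != nil {
		return nil, err
	}
	fields := strings.Fields(string(sum))
	if len(fields) == 0 {
		return nil, fmt.Errorf("empty checksum file for %s", url)
	}
	got := sha256.Sum256(body)
	if hex.EncodeToString(got[:]) != fields[0] {
		return nil, fmt.Errorf("checksum mismatch for %s", url)
	}
	return body, nil
}

func get(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	bin, err := fetchChecked("https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("downloaded %d bytes, checksum OK\n", len(bin))
}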
	I1213 11:37:06.675725  568526 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:37:06.688100  568526 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 11:37:06.701887  568526 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 11:37:06.718404  568526 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1213 11:37:06.732669  568526 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:37:06.736855  568526 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
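That bash one-liner keeps /etc/hosts pinning control-plane.minikube.internal to the node IP: filter out any stale line for the name, append a fresh ip<TAB>name entry, and copy the temp file back into place. The same rewrite in Go, operating on the file contents as a string:

package main

import (
	"fmt"
	"strings"
)

// pinHost reproduces the /etc/hosts rewrite above: drop any existing
// line for the name, then append "ip<TAB>name" so lookups of
// control-plane.minikube.internal resolve to the node IP.
func pinHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
}

func main() {
	fmt.Print(pinHost("127.0.0.1\tlocalhost", "192.168.85.2", "control-plane.minikube.internal"))
}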
	I1213 11:37:06.747297  568526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:37:06.890597  568526 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:37:06.911412  568526 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352 for IP: 192.168.85.2
	I1213 11:37:06.911431  568526 certs.go:195] generating shared ca certs ...
	I1213 11:37:06.911448  568526 certs.go:227] acquiring lock for ca certs: {Name:mkca84a85b1b664dba08ef567e84d493256f825e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:37:06.911603  568526 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key
	I1213 11:37:06.911647  568526 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key
	I1213 11:37:06.911654  568526 certs.go:257] generating profile certs ...
	I1213 11:37:06.911708  568526 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/client.key
	I1213 11:37:06.911719  568526 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/client.crt with IP's: []
	I1213 11:37:07.330021  568526 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/client.crt ...
	I1213 11:37:07.330098  568526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/client.crt: {Name:mk1e2bcd17bb1cf2a14e3226bad8f8a4061a17d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:37:07.330358  568526 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/client.key ...
	I1213 11:37:07.330399  568526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/client.key: {Name:mke3f63967e3a76da8fb9f0edce4f680807523f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:37:07.330556  568526 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/apiserver.key.cd574fc3
	I1213 11:37:07.330597  568526 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/apiserver.crt.cd574fc3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1213 11:37:07.980037  568526 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/apiserver.crt.cd574fc3 ...
	I1213 11:37:07.980072  568526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/apiserver.crt.cd574fc3: {Name:mk9235ca6ac1110b4cf4a570a3fe20042e1329f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:37:07.980314  568526 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/apiserver.key.cd574fc3 ...
	I1213 11:37:07.980334  568526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/apiserver.key.cd574fc3: {Name:mk7b16a7d1532cfe1679a5c8721ac823fcc75e36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:37:07.980477  568526 certs.go:382] copying /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/apiserver.crt.cd574fc3 -> /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/apiserver.crt
	I1213 11:37:07.980577  568526 certs.go:386] copying /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/apiserver.key.cd574fc3 -> /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/apiserver.key
	I1213 11:37:07.980676  568526 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/proxy-client.key
	I1213 11:37:07.980726  568526 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/proxy-client.crt with IP's: []
	I1213 11:37:08.449812  568526 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/proxy-client.crt ...
	I1213 11:37:08.453449  568526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/proxy-client.crt: {Name:mk2e4bdf35284af3b1eb74e17a80aa92ea538da1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:37:08.453703  568526 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/proxy-client.key ...
	I1213 11:37:08.453740  568526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/proxy-client.key: {Name:mke82fb8a1172e3b26bd1e903b4882906881f9c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
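The profile certs are generated on the host (the crypto.go lines): each is signed by the shared minikubeCA, and the apiserver cert carries the service IP, loopback, and node IP as SANs so it validates at every address clients dial. A compressed sketch of signing a serving cert with those IP SANs, standing in for minikube's crypto helpers rather than reproducing them:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Self-signed CA, playing the role of minikubeCA.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert with the same IP SANs the log shows for the apiserver.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("signed apiserver cert: %d DER bytes\n", len(srvDER))
}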
	I1213 11:37:08.454000  568526 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem (1338 bytes)
	W1213 11:37:08.454072  568526 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915_empty.pem, impossibly tiny 0 bytes
	I1213 11:37:08.454098  568526 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:37:08.454165  568526 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:37:08.454223  568526 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:37:08.454273  568526 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem (1675 bytes)
	I1213 11:37:08.454363  568526 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 11:37:08.455027  568526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:37:08.493136  568526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:37:08.533519  568526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:37:08.555204  568526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:37:08.581960  568526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 11:37:08.605933  568526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 11:37:08.631210  568526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:37:08.651291  568526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 11:37:08.677371  568526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /usr/share/ca-certificates/3089152.pem (1708 bytes)
	I1213 11:37:08.711192  568526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:37:08.732192  568526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem --> /usr/share/ca-certificates/308915.pem (1338 bytes)
	I1213 11:37:08.756239  568526 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:37:08.772087  568526 ssh_runner.go:195] Run: openssl version
	I1213 11:37:08.779992  568526 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:37:08.788684  568526 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:37:08.798626  568526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:37:08.804189  568526 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:37:08.804259  568526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:37:08.849940  568526 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:37:08.857756  568526 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 11:37:08.865365  568526 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/308915.pem
	I1213 11:37:08.875401  568526 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/308915.pem /etc/ssl/certs/308915.pem
	I1213 11:37:08.884021  568526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/308915.pem
	I1213 11:37:08.888421  568526 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:22 /usr/share/ca-certificates/308915.pem
	I1213 11:37:08.888486  568526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308915.pem
	I1213 11:37:08.931373  568526 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 11:37:08.939408  568526 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/308915.pem /etc/ssl/certs/51391683.0
	I1213 11:37:08.947237  568526 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3089152.pem
	I1213 11:37:08.958715  568526 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3089152.pem /etc/ssl/certs/3089152.pem
	I1213 11:37:08.967350  568526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3089152.pem
	I1213 11:37:08.971920  568526 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:22 /usr/share/ca-certificates/3089152.pem
	I1213 11:37:08.972039  568526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3089152.pem
	I1213 11:37:09.020965  568526 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:37:09.030508  568526 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3089152.pem /etc/ssl/certs/3ec20f2e.0
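The `openssl x509 -hash -noout` / `ln -fs` pairs install each PEM into the OpenSSL trust store, which looks certificates up by subject-name hash: /etc/ssl/certs/<hash>.0 must point at the PEM (hence b5213941.0 for minikubeCA.pem above). The same two steps sketched in Go, shelling out to the real openssl binary:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCert symlinks /etc/ssl/certs/<subject-hash>.0 at the PEM,
// which is how OpenSSL's lookup-by-hash finds trusted CAs.
func installCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // emulate ln -f: replace any stale link
	return os.Symlink(pem, link)
}

func main() {
	if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}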
	I1213 11:37:09.039654  568526 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:37:09.044472  568526 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 11:37:09.044525  568526 kubeadm.go:401] StartCluster: {Name:no-preload-333352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-333352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:37:09.044608  568526 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 11:37:09.044719  568526 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:37:09.079472  568526 cri.go:89] found id: ""
	I1213 11:37:09.079541  568526 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:37:09.095781  568526 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 11:37:09.108858  568526 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:37:09.108938  568526 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:37:09.119723  568526 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:37:09.119753  568526 kubeadm.go:158] found existing configuration files:
	
	I1213 11:37:09.119805  568526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:37:09.132592  568526 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:37:09.132680  568526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:37:09.141619  568526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:37:09.149744  568526 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:37:09.149822  568526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:37:09.157451  568526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:37:09.176825  568526 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:37:09.176914  568526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:37:09.195067  568526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:37:09.224240  568526 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:37:09.224314  568526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
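Before kubeadm init runs, every kubeconfig under /etc/kubernetes is grepped for the expected endpoint https://control-plane.minikube.internal:8443 and deleted when the check fails, so nothing stale can point the new cluster elsewhere (the rm -f on a file that was already missing is a no-op). Roughly:

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleKubeconfigs removes any conf that does not reference the
// expected control-plane endpoint; a missing file fails the check too.
func cleanStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !strings.Contains(string(data), endpoint) {
			os.Remove(p)
			fmt.Println("removed stale config:", p)
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}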
	I1213 11:37:09.245980  568526 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:37:09.304011  568526 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:37:09.305724  568526 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:37:09.412860  568526 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:37:09.412949  568526 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:37:09.413000  568526 kubeadm.go:319] OS: Linux
	I1213 11:37:09.413059  568526 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:37:09.413125  568526 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:37:09.413178  568526 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:37:09.413239  568526 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:37:09.413300  568526 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:37:09.413366  568526 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:37:09.413428  568526 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:37:09.413498  568526 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:37:09.413559  568526 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:37:09.517095  568526 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:37:09.517219  568526 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:37:09.517329  568526 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:37:09.537279  568526 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:37:09.543461  568526 out.go:252]   - Generating certificates and keys ...
	I1213 11:37:09.543569  568526 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:37:09.543650  568526 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:37:09.743752  568526 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 11:37:09.920231  568526 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 11:37:10.552302  568526 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 11:37:10.755379  568526 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 11:37:10.972275  568526 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 11:37:10.972422  568526 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-333352] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 11:37:11.179780  568526 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 11:37:11.180400  568526 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-333352] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 11:37:11.311883  568526 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 11:37:11.955739  568526 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 11:37:12.415065  568526 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 11:37:12.418424  568526 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:37:12.583040  568526 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:37:12.943299  568526 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:37:13.120488  568526 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:37:13.520165  568526 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:37:13.770613  568526 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:37:13.771685  568526 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:37:13.774606  568526 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:37:13.777958  568526 out.go:252]   - Booting up control plane ...
	I1213 11:37:13.778079  568526 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:37:13.778160  568526 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:37:13.782992  568526 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:37:13.816509  568526 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:37:13.816628  568526 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:37:13.826164  568526 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:37:13.826270  568526 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:37:13.826315  568526 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:37:13.999172  568526 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:37:13.999302  568526 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 11:41:14.001733  568526 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.002584193s
	I1213 11:41:14.001783  568526 kubeadm.go:319] 
	I1213 11:41:14.001839  568526 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 11:41:14.001875  568526 kubeadm.go:319] 	- The kubelet is not running
	I1213 11:41:14.001979  568526 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 11:41:14.001989  568526 kubeadm.go:319] 
	I1213 11:41:14.002087  568526 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 11:41:14.002121  568526 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 11:41:14.002155  568526 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 11:41:14.002163  568526 kubeadm.go:319] 
	I1213 11:41:14.006503  568526 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 11:41:14.006996  568526 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 11:41:14.007125  568526 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 11:41:14.007362  568526 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 11:41:14.007370  568526 kubeadm.go:319] 
	I1213 11:41:14.007439  568526 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1213 11:41:14.007568  568526 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-333352] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-333352] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.002584193s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
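The failure itself is the kubelet never answering its health endpoint: kubeadm's wait-control-plane phase polls http://127.0.0.1:10248/healthz for up to 4m0s before surrendering with the context-deadline error above. The stderr warning about cgroups v1 is consistent with the cause on this 5.15.0-1084-aws kernel: by its own text, kubelet v1.35+ requires the KubeletConfiguration option FailCgroupV1 to be set to false on cgroup v1 nodes. A minimal version of the health poll kubeadm performs:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls the kubelet's healthz endpoint until it answers
// 200 or the deadline passes, mirroring kubeadm's [kubelet-check].
func waitHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("kubelet not healthy after %s", timeout)
}

func main() {
	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}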
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-333352] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-333352] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.002584193s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
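	What follows in the log is minikube's retry path: kubeadm reset, stale-config cleanup, then a second kubeadm init. Condensed into readable form (a sketch; the long --ignore-preflight-errors list is collapsed here to SystemVerification, the one entry the log itself calls out as ignored for the docker driver):

	    BIN=/var/lib/minikube/binaries/v1.35.0-beta.0
	    # Wipe the failed attempt's state, then re-run init with the same generated config
	    sudo env PATH="$BIN:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force
	    sudo env PATH="$BIN:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	      --ignore-preflight-errors=SystemVerification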
	
	I1213 11:41:14.007669  568526 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 11:41:14.426171  568526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:41:14.440059  568526 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:41:14.440125  568526 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:41:14.448328  568526 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:41:14.448349  568526 kubeadm.go:158] found existing configuration files:
	
	I1213 11:41:14.448402  568526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:41:14.456296  568526 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:41:14.456363  568526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:41:14.464058  568526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:41:14.472162  568526 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:41:14.472226  568526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:41:14.480152  568526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:41:14.488408  568526 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:41:14.488487  568526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:41:14.495870  568526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:41:14.504201  568526 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:41:14.504277  568526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
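	The four grep/rm pairs above amount to one cleanup loop: keep a kubeconfig only if it already points at the expected control-plane endpoint. A sketch reconstructed from exactly the paths and endpoint string shown in this log:

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # absent or stale endpoint: remove the file so kubeadm init rewrites it
	      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done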
	I1213 11:41:14.512663  568526 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:41:14.552571  568526 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:41:14.552826  568526 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:41:14.636263  568526 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:41:14.636422  568526 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:41:14.636489  568526 kubeadm.go:319] OS: Linux
	I1213 11:41:14.636539  568526 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:41:14.636588  568526 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:41:14.636636  568526 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:41:14.636685  568526 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:41:14.636734  568526 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:41:14.636789  568526 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:41:14.636835  568526 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:41:14.636883  568526 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:41:14.636931  568526 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:41:14.704766  568526 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:41:14.704881  568526 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:41:14.704980  568526 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:41:14.711349  568526 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:41:14.716890  568526 out.go:252]   - Generating certificates and keys ...
	I1213 11:41:14.717059  568526 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:41:14.717173  568526 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:41:14.717297  568526 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 11:41:14.717400  568526 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 11:41:14.717519  568526 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 11:41:14.717610  568526 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 11:41:14.717711  568526 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 11:41:14.717781  568526 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 11:41:14.717862  568526 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 11:41:14.717939  568526 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 11:41:14.717980  568526 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 11:41:14.718040  568526 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:41:14.890315  568526 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:41:15.082140  568526 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:41:15.215199  568526 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:41:15.487457  568526 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:41:15.706896  568526 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:41:15.707589  568526 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:41:15.710265  568526 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:41:15.713385  568526 out.go:252]   - Booting up control plane ...
	I1213 11:41:15.713507  568526 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:41:15.713599  568526 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:41:15.713985  568526 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:41:15.742850  568526 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:41:15.742960  568526 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:41:15.749220  568526 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:41:15.751042  568526 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:41:15.751096  568526 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:41:15.899409  568526 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:41:15.899529  568526 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 11:45:15.897209  568526 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000248006s
	I1213 11:45:15.897238  568526 kubeadm.go:319] 
	I1213 11:45:15.897296  568526 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 11:45:15.897329  568526 kubeadm.go:319] 	- The kubelet is not running
	I1213 11:45:15.897444  568526 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 11:45:15.897460  568526 kubeadm.go:319] 
	I1213 11:45:15.897565  568526 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 11:45:15.897602  568526 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 11:45:15.897639  568526 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 11:45:15.897647  568526 kubeadm.go:319] 
	I1213 11:45:15.901779  568526 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 11:45:15.902208  568526 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 11:45:15.902322  568526 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 11:45:15.902560  568526 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 11:45:15.902570  568526 kubeadm.go:319] 
	I1213 11:45:15.902639  568526 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 11:45:15.902721  568526 kubeadm.go:403] duration metric: took 8m6.858200115s to StartCluster
	I1213 11:45:15.902773  568526 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:45:15.902832  568526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:45:15.928207  568526 cri.go:89] found id: ""
	I1213 11:45:15.928245  568526 logs.go:282] 0 containers: []
	W1213 11:45:15.928254  568526 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:45:15.928262  568526 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:45:15.928320  568526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:45:15.954471  568526 cri.go:89] found id: ""
	I1213 11:45:15.954508  568526 logs.go:282] 0 containers: []
	W1213 11:45:15.954520  568526 logs.go:284] No container was found matching "etcd"
	I1213 11:45:15.954531  568526 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:45:15.954610  568526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:45:15.979197  568526 cri.go:89] found id: ""
	I1213 11:45:15.979226  568526 logs.go:282] 0 containers: []
	W1213 11:45:15.979236  568526 logs.go:284] No container was found matching "coredns"
	I1213 11:45:15.979243  568526 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:45:15.979315  568526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:45:16.008075  568526 cri.go:89] found id: ""
	I1213 11:45:16.008098  568526 logs.go:282] 0 containers: []
	W1213 11:45:16.008107  568526 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:45:16.008118  568526 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:45:16.008193  568526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:45:16.034169  568526 cri.go:89] found id: ""
	I1213 11:45:16.034191  568526 logs.go:282] 0 containers: []
	W1213 11:45:16.034199  568526 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:45:16.034207  568526 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:45:16.034265  568526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:45:16.058826  568526 cri.go:89] found id: ""
	I1213 11:45:16.058854  568526 logs.go:282] 0 containers: []
	W1213 11:45:16.058862  568526 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:45:16.058869  568526 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:45:16.058928  568526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:45:16.083123  568526 cri.go:89] found id: ""
	I1213 11:45:16.083151  568526 logs.go:282] 0 containers: []
	W1213 11:45:16.083160  568526 logs.go:284] No container was found matching "kindnet"
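	The sweep above, which finds no control-plane containers at all, can be reproduced by hand on the node with the same crictl filters the log shows:

	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      echo "== $name =="
	      sudo crictl ps -a --quiet --name="$name"   # no output here means the container was never created
	    done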
	I1213 11:45:16.083171  568526 logs.go:123] Gathering logs for dmesg ...
	I1213 11:45:16.083184  568526 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:45:16.100676  568526 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:45:16.100707  568526 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:45:16.166022  568526 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:45:16.157896    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.158620    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.160237    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.160701    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.162431    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:45:16.157896    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.158620    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.160237    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.160701    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.162431    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
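	The describe-nodes failure above is a symptom, not a separate problem: with no kubelet, the kube-apiserver static pod never starts, so every request to localhost:8443 is refused. The same check can be made directly on the node (binary and kubeconfig paths as in the log; the healthz curl is an illustrative extra):

	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	      --kubeconfig=/var/lib/minikube/kubeconfig get nodes
	    curl -k https://localhost:8443/healthz   # connection refused while the apiserver is down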
	I1213 11:45:16.166047  568526 logs.go:123] Gathering logs for containerd ...
	I1213 11:45:16.166060  568526 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:45:16.207842  568526 logs.go:123] Gathering logs for container status ...
	I1213 11:45:16.207880  568526 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:45:16.235350  568526 logs.go:123] Gathering logs for kubelet ...
	I1213 11:45:16.235376  568526 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
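	The log-gathering pass above uses only standard tooling; the same diagnostic bundle can be collected manually on the node with the commands shown:

	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u containerd -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo crictl ps -a   # container status across all states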
	W1213 11:45:16.294386  568526 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000248006s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 11:45:16.294456  568526 out.go:285] * 
	W1213 11:45:16.294516  568526 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000248006s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 11:45:16.294544  568526 out.go:285] * 
	W1213 11:45:16.296685  568526 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 11:45:16.302469  568526 out.go:203] 
	W1213 11:45:16.306292  568526 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000248006s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 11:45:16.306365  568526 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 11:45:16.306395  568526 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 11:45:16.309964  568526 out.go:203] 

** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p no-preload-333352 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
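The suggestion embedded in the failure output can be tried on a clean profile. A sketch reusing this test's own start flags plus the suggested extra-config; whether the systemd cgroup driver actually fixes this kubelet-v1.35/cgroup-v1 combination (the warnings also name the kubelet option FailCgroupV1 for cgroup-v1 hosts) is untested here:

    out/minikube-linux-arm64 delete -p no-preload-333352
    out/minikube-linux-arm64 start -p no-preload-333352 --memory=3072 --wait=true \
      --preload=false --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.35.0-beta.0 \
      --extra-config=kubelet.cgroup-driver=systemd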
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
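When reading the inspect dump that follows, single fields are easier to pull out with docker's --format flag; a sketch whose Go-template paths match keys visible in the JSON below:

    docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' no-preload-333352
    docker inspect -f '{{json .NetworkSettings.Ports}}' no-preload-333352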
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-333352
helpers_test.go:244: (dbg) docker inspect no-preload-333352:

-- stdout --
	[
	    {
	        "Id": "ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db",
	        "Created": "2025-12-13T11:36:44.52795509Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 568910,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T11:36:44.610473104Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db/hostname",
	        "HostsPath": "/var/lib/docker/containers/ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db/hosts",
	        "LogPath": "/var/lib/docker/containers/ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db/ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db-json.log",
	        "Name": "/no-preload-333352",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-333352:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-333352",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db",
	                "LowerDir": "/var/lib/docker/overlay2/d63500bace015e4f0992e53022eb307c815344bef0eac4b57fc68ef6b6be3891-init/diff:/var/lib/docker/overlay2/5d0ca86320f2c06765995d644a73af30cd6ddb36f5c1a2a6b1ebb24af53cdabd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d63500bace015e4f0992e53022eb307c815344bef0eac4b57fc68ef6b6be3891/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d63500bace015e4f0992e53022eb307c815344bef0eac4b57fc68ef6b6be3891/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d63500bace015e4f0992e53022eb307c815344bef0eac4b57fc68ef6b6be3891/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-333352",
	                "Source": "/var/lib/docker/volumes/no-preload-333352/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-333352",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-333352",
	                "name.minikube.sigs.k8s.io": "no-preload-333352",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "661b691bd6512fd4efbd202820a9bae1c5beb21cce06578707e71b64c02a0d52",
	            "SandboxKey": "/var/run/docker/netns/661b691bd651",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33405"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33406"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33409"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33407"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33408"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-333352": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:23:72:9e:c3:20",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ee20fc50f482b31273047147a2f419c36704bb98933537d0ac5901a560402043",
	                    "EndpointID": "eaefa46f6237ec9d0c60ef1c735019996dda65756a613e136b17ca120c60027b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-333352",
	                        "ca124efb8aeb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
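The container inspect dump above is easiest to consume field by field rather than as a whole. As a sketch (same Go-template syntax the harness itself uses in the cli_runner steps further down; container name taken from this run), the published host port for the API server can be pulled out directly:

	docker container inspect no-preload-333352 \
	  --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'

Against the Ports map above this would print 33408.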
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-333352 -n no-preload-333352
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-333352 -n no-preload-333352: exit status 6 (353.833184ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 11:45:16.778161  594049 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-333352" does not appear in /home/jenkins/minikube-integration/22127-307042/kubeconfig

                                                
                                                
** /stderr **
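Exit status 6 here is the expected signature of a stale kubeconfig: the host is Running, but the "no-preload-333352" endpoint is missing from the kubeconfig file named in the stderr. Following the hint in the stdout, a sketch of the manual fix (profile flag assumed to match this run) would be:

	out/minikube-linux-arm64 update-context -p no-preload-333352

after which status should report kubeconfig: Configured instead of failing.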
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-333352 logs -n 25
E1213 11:45:17.535243  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/default-k8s-diff-port-191845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:261: TestStartStop/group/no-preload/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-333352 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-333352            │ jenkins │ v1.37.0 │ 13 Dec 25 11:36 UTC │                     │
	│ start   │ -p cert-expiration-086397 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                            │ cert-expiration-086397       │ jenkins │ v1.37.0 │ 13 Dec 25 11:36 UTC │ 13 Dec 25 11:36 UTC │
	│ delete  │ -p cert-expiration-086397                                                                                                                                                                                                                                  │ cert-expiration-086397       │ jenkins │ v1.37.0 │ 13 Dec 25 11:36 UTC │ 13 Dec 25 11:36 UTC │
	│ start   │ -p embed-certs-951675 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:36 UTC │ 13 Dec 25 11:37 UTC │
	│ addons  │ enable metrics-server -p embed-certs-951675 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:38 UTC │ 13 Dec 25 11:38 UTC │
	│ stop    │ -p embed-certs-951675 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:38 UTC │ 13 Dec 25 11:38 UTC │
	│ addons  │ enable dashboard -p embed-certs-951675 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:38 UTC │ 13 Dec 25 11:38 UTC │
	│ start   │ -p embed-certs-951675 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:38 UTC │ 13 Dec 25 11:39 UTC │
	│ image   │ embed-certs-951675 image list --format=json                                                                                                                                                                                                                │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ pause   │ -p embed-certs-951675 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ unpause │ -p embed-certs-951675 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ delete  │ -p embed-certs-951675                                                                                                                                                                                                                                      │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ delete  │ -p embed-certs-951675                                                                                                                                                                                                                                      │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ delete  │ -p disable-driver-mounts-823668                                                                                                                                                                                                                            │ disable-driver-mounts-823668 │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ start   │ -p default-k8s-diff-port-191845 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:40 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-191845 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:40 UTC │
	│ stop    │ -p default-k8s-diff-port-191845 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:40 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-191845 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:40 UTC │
	│ start   │ -p default-k8s-diff-port-191845 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:41 UTC │
	│ image   │ default-k8s-diff-port-191845 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ pause   │ -p default-k8s-diff-port-191845 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ unpause │ -p default-k8s-diff-port-191845 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ delete  │ -p default-k8s-diff-port-191845                                                                                                                                                                                                                            │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ delete  │ -p default-k8s-diff-port-191845                                                                                                                                                                                                                            │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ start   │ -p newest-cni-796924 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-796924            │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 11:41:40
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 11:41:40.611522  589123 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:41:40.611651  589123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:41:40.611663  589123 out.go:374] Setting ErrFile to fd 2...
	I1213 11:41:40.611668  589123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:41:40.611912  589123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 11:41:40.612341  589123 out.go:368] Setting JSON to false
	I1213 11:41:40.613214  589123 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":15853,"bootTime":1765610247,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 11:41:40.613281  589123 start.go:143] virtualization:  
	I1213 11:41:40.617550  589123 out.go:179] * [newest-cni-796924] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:41:40.621011  589123 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:41:40.621109  589123 notify.go:221] Checking for updates...
	I1213 11:41:40.627428  589123 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:41:40.630647  589123 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 11:41:40.633653  589123 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 11:41:40.636743  589123 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:41:40.639875  589123 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:41:40.643470  589123 config.go:182] Loaded profile config "no-preload-333352": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:41:40.643591  589123 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:41:40.680100  589123 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:41:40.680226  589123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:41:40.756616  589123 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:41:40.747142182 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:41:40.756723  589123 docker.go:319] overlay module found
	I1213 11:41:40.759961  589123 out.go:179] * Using the docker driver based on user configuration
	I1213 11:41:40.762776  589123 start.go:309] selected driver: docker
	I1213 11:41:40.762800  589123 start.go:927] validating driver "docker" against <nil>
	I1213 11:41:40.762814  589123 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:41:40.763539  589123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:41:40.821660  589123 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:41:40.812604764 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:41:40.821819  589123 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1213 11:41:40.821853  589123 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1213 11:41:40.822076  589123 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 11:41:40.824951  589123 out.go:179] * Using Docker driver with root privileges
	I1213 11:41:40.827804  589123 cni.go:84] Creating CNI manager for ""
	I1213 11:41:40.827876  589123 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 11:41:40.827892  589123 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 11:41:40.827980  589123 start.go:353] cluster config:
	{Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
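The cluster config above is persisted verbatim as the profile's config.json (its path is logged a few lines below). To check a single field without re-running start, something like this works, assuming jq is available on the host (jq itself is not part of the harness):

	jq '.KubernetesConfig.KubernetesVersion' \
	  /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/config.json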
	I1213 11:41:40.831095  589123 out.go:179] * Starting "newest-cni-796924" primary control-plane node in "newest-cni-796924" cluster
	I1213 11:41:40.833926  589123 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 11:41:40.836836  589123 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:41:40.839602  589123 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 11:41:40.839653  589123 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 11:41:40.839668  589123 cache.go:65] Caching tarball of preloaded images
	I1213 11:41:40.839677  589123 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:41:40.839751  589123 preload.go:238] Found /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 11:41:40.839761  589123 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 11:41:40.839868  589123 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/config.json ...
	I1213 11:41:40.839885  589123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/config.json: {Name:mk0ce282ac2d53ca7f0abb05f9aee384330b83fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:40.859227  589123 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:41:40.859251  589123 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:41:40.859271  589123 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:41:40.859304  589123 start.go:360] acquireMachinesLock for newest-cni-796924: {Name:mkb23dc851632c47983afd0f3cb215d071a4c6d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:41:40.859430  589123 start.go:364] duration metric: took 105.773µs to acquireMachinesLock for "newest-cni-796924"
	I1213 11:41:40.859462  589123 start.go:93] Provisioning new machine with config: &{Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 11:41:40.859540  589123 start.go:125] createHost starting for "" (driver="docker")
	I1213 11:41:40.862994  589123 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 11:41:40.863248  589123 start.go:159] libmachine.API.Create for "newest-cni-796924" (driver="docker")
	I1213 11:41:40.863284  589123 client.go:173] LocalClient.Create starting
	I1213 11:41:40.863374  589123 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem
	I1213 11:41:40.863413  589123 main.go:143] libmachine: Decoding PEM data...
	I1213 11:41:40.863433  589123 main.go:143] libmachine: Parsing certificate...
	I1213 11:41:40.863487  589123 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem
	I1213 11:41:40.863508  589123 main.go:143] libmachine: Decoding PEM data...
	I1213 11:41:40.863527  589123 main.go:143] libmachine: Parsing certificate...
	I1213 11:41:40.863921  589123 cli_runner.go:164] Run: docker network inspect newest-cni-796924 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 11:41:40.879900  589123 cli_runner.go:211] docker network inspect newest-cni-796924 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 11:41:40.879981  589123 network_create.go:284] running [docker network inspect newest-cni-796924] to gather additional debugging logs...
	I1213 11:41:40.880002  589123 cli_runner.go:164] Run: docker network inspect newest-cni-796924
	W1213 11:41:40.894997  589123 cli_runner.go:211] docker network inspect newest-cni-796924 returned with exit code 1
	I1213 11:41:40.895050  589123 network_create.go:287] error running [docker network inspect newest-cni-796924]: docker network inspect newest-cni-796924: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-796924 not found
	I1213 11:41:40.895065  589123 network_create.go:289] output of [docker network inspect newest-cni-796924]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-796924 not found
	
	** /stderr **
	I1213 11:41:40.895186  589123 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:41:40.912767  589123 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-381e4ce3c9ab IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:2d:23:57:0e:cc} reservation:<nil>}
	I1213 11:41:40.913250  589123 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bd1082d121b0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:7a:42:ce:41:ea:ae} reservation:<nil>}
	I1213 11:41:40.913761  589123 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ebeb7162e340 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:cf:aa:41:ac:19} reservation:<nil>}
	I1213 11:41:40.914391  589123 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019db560}
	I1213 11:41:40.914425  589123 network_create.go:124] attempt to create docker network newest-cni-796924 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1213 11:41:40.914493  589123 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-796924 newest-cni-796924
	I1213 11:41:40.975597  589123 network_create.go:108] docker network newest-cni-796924 192.168.76.0/24 created
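Because 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 were already taken, minikube settled on 192.168.76.0/24. The same Go template the harness uses for network inspection can confirm the allocation after the fact (a sketch against this run's network name):

	docker network inspect newest-cni-796924 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'

which should print 192.168.76.0/24 192.168.76.1.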
	I1213 11:41:40.975632  589123 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-796924" container
	I1213 11:41:40.975710  589123 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 11:41:40.992098  589123 cli_runner.go:164] Run: docker volume create newest-cni-796924 --label name.minikube.sigs.k8s.io=newest-cni-796924 --label created_by.minikube.sigs.k8s.io=true
	I1213 11:41:41.011676  589123 oci.go:103] Successfully created a docker volume newest-cni-796924
	I1213 11:41:41.011779  589123 cli_runner.go:164] Run: docker run --rm --name newest-cni-796924-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-796924 --entrypoint /usr/bin/test -v newest-cni-796924:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 11:41:41.562335  589123 oci.go:107] Successfully prepared a docker volume newest-cni-796924
	I1213 11:41:41.562406  589123 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 11:41:41.562420  589123 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 11:41:41.562520  589123 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-796924:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 11:41:45.483539  589123 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-796924:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.920978562s)
	I1213 11:41:45.483577  589123 kic.go:203] duration metric: took 3.921153184s to extract preloaded images to volume ...
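The preload is applied through a throwaway sidecar: the kicbase image is run with tar as the entrypoint so the lz4 tarball lands directly in the named volume that later becomes the node's /var. A sketch for spot-checking what the extraction left behind, reusing the same image and volume (the directory path is an assumption about the preload layout):

	docker run --rm --entrypoint /usr/bin/ls \
	  -v newest-cni-796924:/var \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f \
	  /var/lib/containerd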
	W1213 11:41:45.483725  589123 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 11:41:45.483849  589123 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 11:41:45.544786  589123 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-796924 --name newest-cni-796924 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-796924 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-796924 --network newest-cni-796924 --ip 192.168.76.2 --volume newest-cni-796924:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
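Note the --publish=127.0.0.1::8443 form in the docker run above: with the host-port slot left empty, Docker picks an ephemeral loopback port for each published container port, which is why inspect output for these containers shows HostPort values in the 334xx range (33405-33409 for no-preload-333352 earlier, 33430 for this node's SSH port below). To resolve a mapping later without parsing the full inspect JSON:

	docker port newest-cni-796924 8443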
	I1213 11:41:45.837300  589123 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Running}}
	I1213 11:41:45.859478  589123 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:41:45.883282  589123 cli_runner.go:164] Run: docker exec newest-cni-796924 stat /var/lib/dpkg/alternatives/iptables
	I1213 11:41:45.939942  589123 oci.go:144] the created container "newest-cni-796924" has a running status.
	I1213 11:41:45.939979  589123 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa...
	I1213 11:41:46.475943  589123 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 11:41:46.497112  589123 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:41:46.514893  589123 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 11:41:46.514914  589123 kic_runner.go:114] Args: [docker exec --privileged newest-cni-796924 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 11:41:46.555872  589123 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:41:46.582882  589123 machine.go:94] provisionDockerMachine start ...
	I1213 11:41:46.583000  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:46.601258  589123 main.go:143] libmachine: Using SSH client type: native
	I1213 11:41:46.601613  589123 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33430 <nil> <nil>}
	I1213 11:41:46.601628  589123 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:41:46.602167  589123 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43966->127.0.0.1:33430: read: connection reset by peer
	I1213 11:41:49.762525  589123 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-796924
	
	I1213 11:41:49.762610  589123 ubuntu.go:182] provisioning hostname "newest-cni-796924"
	I1213 11:41:49.762751  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:49.782841  589123 main.go:143] libmachine: Using SSH client type: native
	I1213 11:41:49.783298  589123 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33430 <nil> <nil>}
	I1213 11:41:49.783328  589123 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-796924 && echo "newest-cni-796924" | sudo tee /etc/hostname
	I1213 11:41:49.948352  589123 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-796924
	
	I1213 11:41:49.948435  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:49.965977  589123 main.go:143] libmachine: Using SSH client type: native
	I1213 11:41:49.966316  589123 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33430 <nil> <nil>}
	I1213 11:41:49.966341  589123 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-796924' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-796924/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-796924' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:41:50.147128  589123 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:41:50.147171  589123 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-307042/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-307042/.minikube}
	I1213 11:41:50.147219  589123 ubuntu.go:190] setting up certificates
	I1213 11:41:50.147230  589123 provision.go:84] configureAuth start
	I1213 11:41:50.147297  589123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-796924
	I1213 11:41:50.165702  589123 provision.go:143] copyHostCerts
	I1213 11:41:50.165784  589123 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem, removing ...
	I1213 11:41:50.165802  589123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem
	I1213 11:41:50.165914  589123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem (1082 bytes)
	I1213 11:41:50.166068  589123 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem, removing ...
	I1213 11:41:50.166080  589123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem
	I1213 11:41:50.166123  589123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem (1123 bytes)
	I1213 11:41:50.166210  589123 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem, removing ...
	I1213 11:41:50.166226  589123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem
	I1213 11:41:50.166257  589123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem (1675 bytes)
	I1213 11:41:50.166335  589123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem org=jenkins.newest-cni-796924 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-796924]
	I1213 11:41:50.575993  589123 provision.go:177] copyRemoteCerts
	I1213 11:41:50.576089  589123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:41:50.576156  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:50.593521  589123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:41:50.702596  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 11:41:50.720289  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:41:50.738001  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 11:41:50.755303  589123 provision.go:87] duration metric: took 608.049982ms to configureAuth
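configureAuth generated a server certificate whose SAN list (logged above) has to cover every address the machine is reached on, including 127.0.0.1 for the published loopback ports. A sketch for verifying the SANs on the generated cert, assuming openssl is present on the host:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'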
	I1213 11:41:50.755333  589123 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:41:50.755533  589123 config.go:182] Loaded profile config "newest-cni-796924": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:41:50.755547  589123 machine.go:97] duration metric: took 4.172642608s to provisionDockerMachine
	I1213 11:41:50.755555  589123 client.go:176] duration metric: took 9.892260099s to LocalClient.Create
	I1213 11:41:50.755575  589123 start.go:167] duration metric: took 9.892327365s to libmachine.API.Create "newest-cni-796924"
	I1213 11:41:50.755586  589123 start.go:293] postStartSetup for "newest-cni-796924" (driver="docker")
	I1213 11:41:50.755596  589123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:41:50.755647  589123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:41:50.755689  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:50.772594  589123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:41:50.878962  589123 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:41:50.882465  589123 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:41:50.882496  589123 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:41:50.882513  589123 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/addons for local assets ...
	I1213 11:41:50.882569  589123 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/files for local assets ...
	I1213 11:41:50.882649  589123 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> 3089152.pem in /etc/ssl/certs
	I1213 11:41:50.882784  589123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:41:50.890136  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 11:41:50.909142  589123 start.go:296] duration metric: took 153.541145ms for postStartSetup
	I1213 11:41:50.909520  589123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-796924
	I1213 11:41:50.926272  589123 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/config.json ...
	I1213 11:41:50.926557  589123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:41:50.926615  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:50.943196  589123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:41:51.043825  589123 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:41:51.048871  589123 start.go:128] duration metric: took 10.189316484s to createHost
	I1213 11:41:51.048902  589123 start.go:83] releasing machines lock for "newest-cni-796924", held for 10.189458492s
	I1213 11:41:51.048990  589123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-796924
	I1213 11:41:51.066001  589123 ssh_runner.go:195] Run: cat /version.json
	I1213 11:41:51.066070  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:51.066359  589123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:41:51.066428  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:51.089473  589123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:41:51.096259  589123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:41:51.199327  589123 ssh_runner.go:195] Run: systemctl --version
	I1213 11:41:51.296961  589123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:41:51.301350  589123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:41:51.301424  589123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:41:51.333583  589123 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
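Competing podman and crio bridge configs are side-lined by renaming them with a .mk_disabled suffix, clearing the way for the kindnet config recommended earlier. To see what is left active on the node, a sketch using minikube ssh with a trailing command (profile name from this run):

	out/minikube-linux-arm64 ssh -p newest-cni-796924 "ls -la /etc/cni/net.d"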
	I1213 11:41:51.333658  589123 start.go:496] detecting cgroup driver to use...
	I1213 11:41:51.333709  589123 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 11:41:51.333790  589123 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 11:41:51.348970  589123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:41:51.362099  589123 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:41:51.362222  589123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:41:51.379694  589123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:41:51.398786  589123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:41:51.510657  589123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:41:51.629156  589123 docker.go:234] disabling docker service ...
	I1213 11:41:51.629223  589123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:41:51.650731  589123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:41:51.664169  589123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:41:51.793148  589123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:41:51.904796  589123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:41:51.919458  589123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:41:51.942455  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 11:41:51.956281  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 11:41:51.965941  589123 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 11:41:51.966013  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 11:41:51.977493  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:41:51.987404  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 11:41:52.000948  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:41:52.013279  589123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:41:52.023039  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 11:41:52.032853  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 11:41:52.042519  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
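Taken together, the sed edits above pin the pause image to registry.k8s.io/pause:3.10.1, force SystemdCgroup = false to match the cgroupfs driver detected on the host, normalize the runc runtime to io.containerd.runc.v2, point conf_dir at /etc/cni/net.d, and re-insert enable_unprivileged_ports = true. A spot check of the result (sketch, run on the node):

	# both values were written by the sed commands above
	grep -E 'sandbox_image|SystemdCgroup' /etc/containerd/config.toml
	# expected (indentation varies with the config layout):
	#   sandbox_image = "registry.k8s.io/pause:3.10.1"
	#   SystemdCgroup = false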
	I1213 11:41:52.052346  589123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:41:52.060125  589123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:41:52.068281  589123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:41:52.179247  589123 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 11:41:52.320321  589123 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 11:41:52.320429  589123 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 11:41:52.324439  589123 start.go:564] Will wait 60s for crictl version
	I1213 11:41:52.324501  589123 ssh_runner.go:195] Run: which crictl
	I1213 11:41:52.328708  589123 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:41:52.357589  589123 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 11:41:52.357683  589123 ssh_runner.go:195] Run: containerd --version
	I1213 11:41:52.383274  589123 ssh_runner.go:195] Run: containerd --version
	I1213 11:41:52.413360  589123 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 11:41:52.416432  589123 cli_runner.go:164] Run: docker network inspect newest-cni-796924 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:41:52.432557  589123 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 11:41:52.436286  589123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:41:52.449106  589123 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 11:41:52.452071  589123 kubeadm.go:884] updating cluster {Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:41:52.452217  589123 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 11:41:52.452308  589123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:41:52.477318  589123 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 11:41:52.477343  589123 containerd.go:534] Images already preloaded, skipping extraction
	I1213 11:41:52.477404  589123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:41:52.505926  589123 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 11:41:52.505953  589123 cache_images.go:86] Images are preloaded, skipping loading
	I1213 11:41:52.505961  589123 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 11:41:52.506065  589123 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-796924 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
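The empty ExecStart= line in the unit above is the standard systemd idiom: it clears the packaged ExecStart so the drop-in can substitute minikube's own kubelet invocation. Once the 10-kubeadm.conf drop-in is copied below, the merged unit can be inspected with:

	# show kubelet.service together with the 10-kubeadm.conf override
	systemctl cat kubelet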
	I1213 11:41:52.506135  589123 ssh_runner.go:195] Run: sudo crictl info
	I1213 11:41:52.531708  589123 cni.go:84] Creating CNI manager for ""
	I1213 11:41:52.531733  589123 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 11:41:52.531753  589123 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 11:41:52.531776  589123 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-796924 NodeName:newest-cni-796924 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:41:52.531907  589123 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-796924"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
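This generated config is what the kubeadm init at the end of this run consumes from /var/tmp/minikube/kubeadm.yaml. It can also be sanity-checked offline before init ever runs; a sketch, assuming kubeadm v1.26+ (which added the validate subcommand):

	# validate the generated config without touching node state
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml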
	
	I1213 11:41:52.531983  589123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 11:41:52.540473  589123 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 11:41:52.540571  589123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:41:52.548635  589123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 11:41:52.562445  589123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 11:41:52.579341  589123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1213 11:41:52.593144  589123 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:41:52.596805  589123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:41:52.607006  589123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:41:52.727771  589123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:41:52.751356  589123 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924 for IP: 192.168.76.2
	I1213 11:41:52.751381  589123 certs.go:195] generating shared ca certs ...
	I1213 11:41:52.751399  589123 certs.go:227] acquiring lock for ca certs: {Name:mkca84a85b1b664dba08ef567e84d493256f825e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:52.751547  589123 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key
	I1213 11:41:52.751597  589123 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key
	I1213 11:41:52.751607  589123 certs.go:257] generating profile certs ...
	I1213 11:41:52.751662  589123 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/client.key
	I1213 11:41:52.751679  589123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/client.crt with IP's: []
	I1213 11:41:53.086363  589123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/client.crt ...
	I1213 11:41:53.086398  589123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/client.crt: {Name:mk66b963bdd54f4b935fe2fc7acd97dde553339b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:53.086603  589123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/client.key ...
	I1213 11:41:53.086620  589123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/client.key: {Name:mk98638456845d9072484c2ea9cf4343d6af1634 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:53.086739  589123 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key.ced45374
	I1213 11:41:53.086760  589123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.crt.ced45374 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1213 11:41:53.240504  589123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.crt.ced45374 ...
	I1213 11:41:53.240537  589123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.crt.ced45374: {Name:mkabd19bc7e960d2c555d82ddd752e663c8f6cd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:53.240708  589123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key.ced45374 ...
	I1213 11:41:53.240722  589123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key.ced45374: {Name:mk3e4fdd1c06bfd329cc4a39da890d8da6317b83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:53.240816  589123 certs.go:382] copying /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.crt.ced45374 -> /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.crt
	I1213 11:41:53.240898  589123 certs.go:386] copying /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key.ced45374 -> /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key
	I1213 11:41:53.240954  589123 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.key
	I1213 11:41:53.240973  589123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.crt with IP's: []
	I1213 11:41:53.471880  589123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.crt ...
	I1213 11:41:53.471916  589123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.crt: {Name:mkc7d686a714b0dc00954cf052cbfbc601a1b715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:53.472127  589123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.key ...
	I1213 11:41:53.472146  589123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.key: {Name:mk3b8f645a4e8504ec9bd2eed45071861029af54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
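The apiserver serving cert generated above is signed for the IPs logged with it (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2). One way to eyeball a generated profile cert's SANs (sketch, using a path from this run):

	# dump the SANs of the freshly written apiserver cert
	openssl x509 -noout -text -in /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.crt | grep -A1 'Subject Alternative Name'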
	I1213 11:41:53.472358  589123 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem (1338 bytes)
	W1213 11:41:53.472406  589123 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915_empty.pem, impossibly tiny 0 bytes
	I1213 11:41:53.472425  589123 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:41:53.472456  589123 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:41:53.472484  589123 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:41:53.472514  589123 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem (1675 bytes)
	I1213 11:41:53.472565  589123 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 11:41:53.473197  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:41:53.497619  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:41:53.520852  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:41:53.540142  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:41:53.558660  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 11:41:53.576992  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 11:41:53.595267  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:41:53.613794  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 11:41:53.632148  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /usr/share/ca-certificates/3089152.pem (1708 bytes)
	I1213 11:41:53.650879  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:41:53.676104  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem --> /usr/share/ca-certificates/308915.pem (1338 bytes)
	I1213 11:41:53.697746  589123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:41:53.714070  589123 ssh_runner.go:195] Run: openssl version
	I1213 11:41:53.722093  589123 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3089152.pem
	I1213 11:41:53.730578  589123 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3089152.pem /etc/ssl/certs/3089152.pem
	I1213 11:41:53.738421  589123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3089152.pem
	I1213 11:41:53.742437  589123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:22 /usr/share/ca-certificates/3089152.pem
	I1213 11:41:53.742533  589123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3089152.pem
	I1213 11:41:53.786290  589123 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:41:53.794147  589123 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3089152.pem /etc/ssl/certs/3ec20f2e.0
	I1213 11:41:53.802168  589123 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:41:53.809707  589123 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:41:53.817606  589123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:41:53.821659  589123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:41:53.821728  589123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:41:53.863050  589123 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:41:53.870933  589123 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 11:41:53.878551  589123 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/308915.pem
	I1213 11:41:53.886061  589123 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/308915.pem /etc/ssl/certs/308915.pem
	I1213 11:41:53.893998  589123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/308915.pem
	I1213 11:41:53.898172  589123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:22 /usr/share/ca-certificates/308915.pem
	I1213 11:41:53.898239  589123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308915.pem
	I1213 11:41:53.939253  589123 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 11:41:53.946895  589123 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/308915.pem /etc/ssl/certs/51391683.0
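The openssl x509 -hash / ln -fs pairs above implement OpenSSL's CA directory convention: a trust-store entry is located via a symlink named <subject-hash>.0 in /etc/ssl/certs. The pairing is visible in the log itself; for instance:

	# the printed hash is the basename of the symlink created just after it
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # -> /etc/ssl/certs/minikubeCA.pem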
	I1213 11:41:53.954761  589123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:41:53.958396  589123 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 11:41:53.958494  589123 kubeadm.go:401] StartCluster: {Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:41:53.958606  589123 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 11:41:53.958783  589123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:41:53.989529  589123 cri.go:89] found id: ""
	I1213 11:41:53.989604  589123 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:41:53.998028  589123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 11:41:54.008661  589123 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:41:54.008741  589123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:41:54.018340  589123 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:41:54.018369  589123 kubeadm.go:158] found existing configuration files:
	
	I1213 11:41:54.018431  589123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:41:54.027307  589123 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:41:54.027393  589123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:41:54.036120  589123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:41:54.044655  589123 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:41:54.044733  589123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:41:54.053302  589123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:41:54.061899  589123 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:41:54.061991  589123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:41:54.070981  589123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:41:54.079553  589123 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:41:54.079622  589123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 11:41:54.087398  589123 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:41:54.133199  589123 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:41:54.133519  589123 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:41:54.230705  589123 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:41:54.230782  589123 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:41:54.230824  589123 kubeadm.go:319] OS: Linux
	I1213 11:41:54.230875  589123 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:41:54.230929  589123 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:41:54.230979  589123 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:41:54.231032  589123 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:41:54.231083  589123 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:41:54.231135  589123 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:41:54.231184  589123 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:41:54.231236  589123 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:41:54.231285  589123 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:41:54.298600  589123 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:41:54.298731  589123 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:41:54.298837  589123 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:41:54.307104  589123 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:41:54.313608  589123 out.go:252]   - Generating certificates and keys ...
	I1213 11:41:54.313778  589123 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:41:54.313890  589123 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:41:54.510481  589123 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 11:41:54.575310  589123 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 11:41:54.686709  589123 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 11:41:54.914237  589123 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 11:41:55.329374  589123 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 11:41:55.329538  589123 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-796924] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 11:41:55.443297  589123 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 11:41:55.443660  589123 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-796924] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 11:41:55.929252  589123 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 11:41:56.099892  589123 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 11:41:56.662486  589123 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 11:41:56.662923  589123 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:41:56.728098  589123 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:41:56.987601  589123 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:41:57.419088  589123 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:41:57.640413  589123 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:41:58.149864  589123 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:41:58.150638  589123 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:41:58.153451  589123 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:41:58.157369  589123 out.go:252]   - Booting up control plane ...
	I1213 11:41:58.157498  589123 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:41:58.157584  589123 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:41:58.157650  589123 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:41:58.194714  589123 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:41:58.194860  589123 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:41:58.202073  589123 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:41:58.202506  589123 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:41:58.202564  589123 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:41:58.355362  589123 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:41:58.355487  589123 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 11:45:15.897209  568526 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000248006s
	I1213 11:45:15.897238  568526 kubeadm.go:319] 
	I1213 11:45:15.897296  568526 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 11:45:15.897329  568526 kubeadm.go:319] 	- The kubelet is not running
	I1213 11:45:15.897444  568526 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 11:45:15.897460  568526 kubeadm.go:319] 
	I1213 11:45:15.897565  568526 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 11:45:15.897602  568526 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 11:45:15.897639  568526 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 11:45:15.897647  568526 kubeadm.go:319] 
	I1213 11:45:15.901779  568526 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 11:45:15.902208  568526 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 11:45:15.902322  568526 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 11:45:15.902560  568526 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 11:45:15.902570  568526 kubeadm.go:319] 
	I1213 11:45:15.902639  568526 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
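The failure sits entirely on the kubelet side: kubeadm's probe of 127.0.0.1:10248 never succeeded, and the crictl listings that follow confirm no control-plane containers were ever created. The probe and the diagnostics kubeadm recommends can be replayed by hand on the node (sketch):

	# the exact health endpoint kubeadm polls for up to 4m0s
	curl -sS http://127.0.0.1:10248/healthz
	# kubelet service state and recent journal lines
	systemctl status kubelet
	journalctl -xeu kubelet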
	I1213 11:45:15.902721  568526 kubeadm.go:403] duration metric: took 8m6.858200115s to StartCluster
	I1213 11:45:15.902773  568526 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:45:15.902832  568526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:45:15.928207  568526 cri.go:89] found id: ""
	I1213 11:45:15.928245  568526 logs.go:282] 0 containers: []
	W1213 11:45:15.928254  568526 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:45:15.928262  568526 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:45:15.928320  568526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:45:15.954471  568526 cri.go:89] found id: ""
	I1213 11:45:15.954508  568526 logs.go:282] 0 containers: []
	W1213 11:45:15.954520  568526 logs.go:284] No container was found matching "etcd"
	I1213 11:45:15.954531  568526 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:45:15.954610  568526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:45:15.979197  568526 cri.go:89] found id: ""
	I1213 11:45:15.979226  568526 logs.go:282] 0 containers: []
	W1213 11:45:15.979236  568526 logs.go:284] No container was found matching "coredns"
	I1213 11:45:15.979243  568526 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:45:15.979315  568526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:45:16.008075  568526 cri.go:89] found id: ""
	I1213 11:45:16.008098  568526 logs.go:282] 0 containers: []
	W1213 11:45:16.008107  568526 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:45:16.008118  568526 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:45:16.008193  568526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:45:16.034169  568526 cri.go:89] found id: ""
	I1213 11:45:16.034191  568526 logs.go:282] 0 containers: []
	W1213 11:45:16.034199  568526 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:45:16.034207  568526 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:45:16.034265  568526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:45:16.058826  568526 cri.go:89] found id: ""
	I1213 11:45:16.058854  568526 logs.go:282] 0 containers: []
	W1213 11:45:16.058862  568526 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:45:16.058869  568526 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:45:16.058928  568526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:45:16.083123  568526 cri.go:89] found id: ""
	I1213 11:45:16.083151  568526 logs.go:282] 0 containers: []
	W1213 11:45:16.083160  568526 logs.go:284] No container was found matching "kindnet"
	I1213 11:45:16.083171  568526 logs.go:123] Gathering logs for dmesg ...
	I1213 11:45:16.083184  568526 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:45:16.100676  568526 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:45:16.100707  568526 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:45:16.166022  568526 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:45:16.157896    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.158620    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.160237    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.160701    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.162431    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:45:16.157896    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.158620    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.160237    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.160701    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.162431    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:45:16.166047  568526 logs.go:123] Gathering logs for containerd ...
	I1213 11:45:16.166060  568526 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:45:16.207842  568526 logs.go:123] Gathering logs for container status ...
	I1213 11:45:16.207880  568526 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:45:16.235350  568526 logs.go:123] Gathering logs for kubelet ...
	I1213 11:45:16.235376  568526 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 11:45:16.294386  568526 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000248006s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
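Of the three warnings, the cgroups v1 one is the actionable hint: on this cgroup v1 host, kubelet v1.35+ requires an explicit opt-in via the FailCgroupV1 configuration option named in the message. A minimal sketch, assuming the camelCase YAML key failCgroupV1 for that field in KubeletConfiguration:

	# opt back in to cgroup v1 for this kubelet, then restart (sketch only)
	echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet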
	W1213 11:45:16.294456  568526 out.go:285] * 
	W1213 11:45:16.294516  568526 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000248006s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 11:45:16.294544  568526 out.go:285] * 
	W1213 11:45:16.296685  568526 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 11:45:16.302469  568526 out.go:203] 
	W1213 11:45:16.306292  568526 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000248006s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 11:45:16.306365  568526 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 11:45:16.306395  568526 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 11:45:16.309964  568526 out.go:203] 
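The suggestion above is minikube's generic cgroup advice; a minimal retry sketch using exactly the flag it names (profile, binary, driver, runtime, and Kubernetes version are taken from this run's profile config, and the outcome is not verified here):

    # Sketch only: re-run the failed start with the cgroup driver override
    # suggested in the log; the other flags mirror the logged profile config.
    out/minikube-linux-arm64 delete -p no-preload-333352
    out/minikube-linux-arm64 start -p no-preload-333352 --driver=docker \
      --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 \
      --extra-config=kubelet.cgroup-driver=systemd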
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 11:36:56 no-preload-333352 containerd[759]: time="2025-12-13T11:36:56.089357963Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:36:57 no-preload-333352 containerd[759]: time="2025-12-13T11:36:57.511622237Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 13 11:36:57 no-preload-333352 containerd[759]: time="2025-12-13T11:36:57.516065454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 13 11:36:57 no-preload-333352 containerd[759]: time="2025-12-13T11:36:57.531326305Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:36:57 no-preload-333352 containerd[759]: time="2025-12-13T11:36:57.531822769Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:36:58 no-preload-333352 containerd[759]: time="2025-12-13T11:36:58.968197722Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 13 11:36:58 no-preload-333352 containerd[759]: time="2025-12-13T11:36:58.971116854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 13 11:36:58 no-preload-333352 containerd[759]: time="2025-12-13T11:36:58.980545274Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:36:58 no-preload-333352 containerd[759]: time="2025-12-13T11:36:58.981362816Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:37:02 no-preload-333352 containerd[759]: time="2025-12-13T11:37:02.214365383Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 13 11:37:02 no-preload-333352 containerd[759]: time="2025-12-13T11:37:02.217628084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 13 11:37:02 no-preload-333352 containerd[759]: time="2025-12-13T11:37:02.241331613Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:37:02 no-preload-333352 containerd[759]: time="2025-12-13T11:37:02.242087346Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:37:03 no-preload-333352 containerd[759]: time="2025-12-13T11:37:03.217262130Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 13 11:37:03 no-preload-333352 containerd[759]: time="2025-12-13T11:37:03.220012055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 13 11:37:03 no-preload-333352 containerd[759]: time="2025-12-13T11:37:03.228672623Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:37:03 no-preload-333352 containerd[759]: time="2025-12-13T11:37:03.229475338Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:37:04 no-preload-333352 containerd[759]: time="2025-12-13T11:37:04.630418294Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 13 11:37:04 no-preload-333352 containerd[759]: time="2025-12-13T11:37:04.633143086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 13 11:37:04 no-preload-333352 containerd[759]: time="2025-12-13T11:37:04.641567747Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:37:04 no-preload-333352 containerd[759]: time="2025-12-13T11:37:04.642255121Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:37:04 no-preload-333352 containerd[759]: time="2025-12-13T11:37:04.996924296Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 13 11:37:05 no-preload-333352 containerd[759]: time="2025-12-13T11:37:05.004833973Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 13 11:37:05 no-preload-333352 containerd[759]: time="2025-12-13T11:37:05.013913352Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:37:05 no-preload-333352 containerd[759]: time="2025-12-13T11:37:05.014372006Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:45:17.453523    5544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:17.454264    5544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:17.455984    5544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:17.456500    5544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:17.458169    5544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
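That connection refusal is downstream of the kubelet never starting: with no kubelet, the static apiserver pod is never launched, so nothing listens on 8443. A hedged way to confirm that from inside the node (the ss invocation is illustrative, not taken from this run):

    # Sketch only: check whether anything is bound to the apiserver port.
    out/minikube-linux-arm64 ssh -p no-preload-333352 -- \
      "sudo ss -ltn | grep -w 8443 || echo 'nothing listening on 8443'"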
	
	
	==> dmesg <==
	[ +12.907693] overlayfs: idmapped layers are currently not supported
	[Dec13 09:25] overlayfs: idmapped layers are currently not supported
	[ +26.192425] overlayfs: idmapped layers are currently not supported
	[Dec13 09:26] overlayfs: idmapped layers are currently not supported
	[ +25.729788] overlayfs: idmapped layers are currently not supported
	[Dec13 09:27] overlayfs: idmapped layers are currently not supported
	[Dec13 09:28] overlayfs: idmapped layers are currently not supported
	[Dec13 09:31] overlayfs: idmapped layers are currently not supported
	[Dec13 09:32] overlayfs: idmapped layers are currently not supported
	[ +17.979093] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec13 09:43] overlayfs: idmapped layers are currently not supported
	[Dec13 09:45] overlayfs: idmapped layers are currently not supported
	[ +25.885102] overlayfs: idmapped layers are currently not supported
	[Dec13 09:46] overlayfs: idmapped layers are currently not supported
	[ +22.078149] overlayfs: idmapped layers are currently not supported
	[Dec13 09:47] overlayfs: idmapped layers are currently not supported
	[Dec13 09:48] overlayfs: idmapped layers are currently not supported
	[Dec13 09:49] overlayfs: idmapped layers are currently not supported
	[Dec13 09:51] overlayfs: idmapped layers are currently not supported
	[ +17.043564] overlayfs: idmapped layers are currently not supported
	[Dec13 09:52] overlayfs: idmapped layers are currently not supported
	[Dec13 09:53] overlayfs: idmapped layers are currently not supported
	[Dec13 10:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:19] hrtimer: interrupt took 21247146 ns
	
	
	==> kernel <==
	 11:45:17 up  4:27,  0 user,  load average: 0.97, 1.38, 1.88
	Linux no-preload-333352 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 11:45:14 no-preload-333352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:45:14 no-preload-333352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 13 11:45:14 no-preload-333352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:45:14 no-preload-333352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:45:14 no-preload-333352 kubelet[5349]: E1213 11:45:14.979377    5349 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:45:14 no-preload-333352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:45:14 no-preload-333352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:45:15 no-preload-333352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 13 11:45:15 no-preload-333352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:45:15 no-preload-333352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:45:15 no-preload-333352 kubelet[5354]: E1213 11:45:15.729071    5354 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:45:15 no-preload-333352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:45:15 no-preload-333352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:45:16 no-preload-333352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 13 11:45:16 no-preload-333352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:45:16 no-preload-333352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:45:16 no-preload-333352 kubelet[5440]: E1213 11:45:16.527761    5440 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:45:16 no-preload-333352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:45:16 no-preload-333352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:45:17 no-preload-333352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 13 11:45:17 no-preload-333352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:45:17 no-preload-333352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:45:17 no-preload-333352 kubelet[5489]: E1213 11:45:17.237061    5489 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:45:17 no-preload-333352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:45:17 no-preload-333352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
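The journal shows the root cause directly: kubelet v1.35's configuration validation refuses to run on a cgroup v1 host, matching the kubeadm warning earlier that 'FailCgroupV1' must be set to 'false' to opt back in. A sketch of that opt-in on the node follows; the failCgroupV1 field name is assumed from the warning text and the config path is the one written by kubeadm above, neither verified in this run:

    # Sketch only: append the opt-in the kubeadm warning describes to the
    # node's kubelet config, then restart the service so validation re-runs.
    out/minikube-linux-arm64 ssh -p no-preload-333352 -- \
      "echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml && \
       sudo systemctl restart kubelet"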
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-333352 -n no-preload-333352
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-333352 -n no-preload-333352: exit status 6 (339.213422ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1213 11:45:17.901046  594271 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-333352" does not appear in /home/jenkins/minikube-integration/22127-307042/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-333352" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (514.70s)
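Aside from the kubelet failure itself, the status output above flags a second, smaller problem: the kubeconfig no longer has an entry for the profile. The report's own suggested fix, sketched:

    # Sketch only: regenerate the kubeconfig entry for the profile, then
    # re-check status, per the warning printed by the status command above.
    out/minikube-linux-arm64 update-context -p no-preload-333352
    out/minikube-linux-arm64 status -p no-preload-333352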

TestStartStop/group/newest-cni/serial/FirstStart (502.28s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-796924 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1213 11:41:48.079993  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:42:52.068054  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/old-k8s-version-624185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:43:15.326286  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:44:30.436631  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:44:47.365041  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:45:08.209279  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/old-k8s-version-624185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:45:12.240800  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:45:12.403444  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/default-k8s-diff-port-191845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:45:12.409981  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/default-k8s-diff-port-191845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:45:12.421444  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/default-k8s-diff-port-191845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:45:12.442957  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/default-k8s-diff-port-191845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:45:12.484451  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/default-k8s-diff-port-191845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:45:12.565927  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/default-k8s-diff-port-191845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:45:12.727396  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/default-k8s-diff-port-191845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:45:13.049280  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/default-k8s-diff-port-191845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:45:13.691511  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/default-k8s-diff-port-191845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:45:14.972850  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/default-k8s-diff-port-191845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-796924 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m20.584903495s)

-- stdout --
	* [newest-cni-796924] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "newest-cni-796924" primary control-plane node in "newest-cni-796924" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	
	

-- /stdout --
** stderr ** 
	I1213 11:41:40.611522  589123 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:41:40.611651  589123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:41:40.611663  589123 out.go:374] Setting ErrFile to fd 2...
	I1213 11:41:40.611668  589123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:41:40.611912  589123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 11:41:40.612341  589123 out.go:368] Setting JSON to false
	I1213 11:41:40.613214  589123 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":15853,"bootTime":1765610247,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 11:41:40.613281  589123 start.go:143] virtualization:  
	I1213 11:41:40.617550  589123 out.go:179] * [newest-cni-796924] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:41:40.621011  589123 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:41:40.621109  589123 notify.go:221] Checking for updates...
	I1213 11:41:40.627428  589123 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:41:40.630647  589123 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 11:41:40.633653  589123 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 11:41:40.636743  589123 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:41:40.639875  589123 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:41:40.643470  589123 config.go:182] Loaded profile config "no-preload-333352": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:41:40.643591  589123 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:41:40.680100  589123 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:41:40.680226  589123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:41:40.756616  589123 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:41:40.747142182 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:41:40.756723  589123 docker.go:319] overlay module found
	I1213 11:41:40.759961  589123 out.go:179] * Using the docker driver based on user configuration
	I1213 11:41:40.762776  589123 start.go:309] selected driver: docker
	I1213 11:41:40.762800  589123 start.go:927] validating driver "docker" against <nil>
	I1213 11:41:40.762814  589123 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:41:40.763539  589123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:41:40.821660  589123 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:41:40.812604764 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:41:40.821819  589123 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1213 11:41:40.821853  589123 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1213 11:41:40.822076  589123 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 11:41:40.824951  589123 out.go:179] * Using Docker driver with root privileges
	I1213 11:41:40.827804  589123 cni.go:84] Creating CNI manager for ""
	I1213 11:41:40.827876  589123 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 11:41:40.827892  589123 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 11:41:40.827980  589123 start.go:353] cluster config:
	{Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:41:40.831095  589123 out.go:179] * Starting "newest-cni-796924" primary control-plane node in "newest-cni-796924" cluster
	I1213 11:41:40.833926  589123 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 11:41:40.836836  589123 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:41:40.839602  589123 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 11:41:40.839653  589123 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 11:41:40.839668  589123 cache.go:65] Caching tarball of preloaded images
	I1213 11:41:40.839677  589123 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:41:40.839751  589123 preload.go:238] Found /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 11:41:40.839761  589123 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 11:41:40.839868  589123 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/config.json ...
	I1213 11:41:40.839885  589123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/config.json: {Name:mk0ce282ac2d53ca7f0abb05f9aee384330b83fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:40.859227  589123 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:41:40.859251  589123 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:41:40.859271  589123 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:41:40.859304  589123 start.go:360] acquireMachinesLock for newest-cni-796924: {Name:mkb23dc851632c47983afd0f3cb215d071a4c6d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:41:40.859430  589123 start.go:364] duration metric: took 105.773µs to acquireMachinesLock for "newest-cni-796924"
	I1213 11:41:40.859462  589123 start.go:93] Provisioning new machine with config: &{Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 11:41:40.859540  589123 start.go:125] createHost starting for "" (driver="docker")
	I1213 11:41:40.862994  589123 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 11:41:40.863248  589123 start.go:159] libmachine.API.Create for "newest-cni-796924" (driver="docker")
	I1213 11:41:40.863284  589123 client.go:173] LocalClient.Create starting
	I1213 11:41:40.863374  589123 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem
	I1213 11:41:40.863413  589123 main.go:143] libmachine: Decoding PEM data...
	I1213 11:41:40.863433  589123 main.go:143] libmachine: Parsing certificate...
	I1213 11:41:40.863487  589123 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem
	I1213 11:41:40.863508  589123 main.go:143] libmachine: Decoding PEM data...
	I1213 11:41:40.863527  589123 main.go:143] libmachine: Parsing certificate...
	I1213 11:41:40.863921  589123 cli_runner.go:164] Run: docker network inspect newest-cni-796924 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 11:41:40.879900  589123 cli_runner.go:211] docker network inspect newest-cni-796924 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 11:41:40.879981  589123 network_create.go:284] running [docker network inspect newest-cni-796924] to gather additional debugging logs...
	I1213 11:41:40.880002  589123 cli_runner.go:164] Run: docker network inspect newest-cni-796924
	W1213 11:41:40.894997  589123 cli_runner.go:211] docker network inspect newest-cni-796924 returned with exit code 1
	I1213 11:41:40.895050  589123 network_create.go:287] error running [docker network inspect newest-cni-796924]: docker network inspect newest-cni-796924: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-796924 not found
	I1213 11:41:40.895065  589123 network_create.go:289] output of [docker network inspect newest-cni-796924]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-796924 not found
	
	** /stderr **
	I1213 11:41:40.895186  589123 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:41:40.912767  589123 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-381e4ce3c9ab IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:2d:23:57:0e:cc} reservation:<nil>}
	I1213 11:41:40.913250  589123 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bd1082d121b0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:7a:42:ce:41:ea:ae} reservation:<nil>}
	I1213 11:41:40.913761  589123 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ebeb7162e340 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:cf:aa:41:ac:19} reservation:<nil>}
	I1213 11:41:40.914391  589123 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019db560}
	I1213 11:41:40.914425  589123 network_create.go:124] attempt to create docker network newest-cni-796924 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1213 11:41:40.914493  589123 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-796924 newest-cni-796924
	I1213 11:41:40.975597  589123 network_create.go:108] docker network newest-cni-796924 192.168.76.0/24 created
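	Subnet selection above walks the private 192.168.x.0/24 ranges, skips the three already in use, and lands on 192.168.76.0/24 for the profile network. A hedged one-liner to verify what was actually created (the format string mirrors minikube's own inspect call in the log):

	    # Sketch only: print the subnet of the network minikube just created.
	    docker network inspect newest-cni-796924 \
	      --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'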
	I1213 11:41:40.975632  589123 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-796924" container
	I1213 11:41:40.975710  589123 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 11:41:40.992098  589123 cli_runner.go:164] Run: docker volume create newest-cni-796924 --label name.minikube.sigs.k8s.io=newest-cni-796924 --label created_by.minikube.sigs.k8s.io=true
	I1213 11:41:41.011676  589123 oci.go:103] Successfully created a docker volume newest-cni-796924
	I1213 11:41:41.011779  589123 cli_runner.go:164] Run: docker run --rm --name newest-cni-796924-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-796924 --entrypoint /usr/bin/test -v newest-cni-796924:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 11:41:41.562335  589123 oci.go:107] Successfully prepared a docker volume newest-cni-796924
	I1213 11:41:41.562406  589123 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 11:41:41.562420  589123 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 11:41:41.562520  589123 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-796924:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 11:41:45.483539  589123 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-796924:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.920978562s)
	I1213 11:41:45.483577  589123 kic.go:203] duration metric: took 3.921153184s to extract preloaded images to volume ...
	W1213 11:41:45.483725  589123 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 11:41:45.483849  589123 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 11:41:45.544786  589123 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-796924 --name newest-cni-796924 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-796924 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-796924 --network newest-cni-796924 --ip 192.168.76.2 --volume newest-cni-796924:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 11:41:45.837300  589123 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Running}}
	I1213 11:41:45.859478  589123 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:41:45.883282  589123 cli_runner.go:164] Run: docker exec newest-cni-796924 stat /var/lib/dpkg/alternatives/iptables
	I1213 11:41:45.939942  589123 oci.go:144] the created container "newest-cni-796924" has a running status.
	I1213 11:41:45.939979  589123 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa...
	I1213 11:41:46.475943  589123 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 11:41:46.497112  589123 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:41:46.514893  589123 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 11:41:46.514914  589123 kic_runner.go:114] Args: [docker exec --privileged newest-cni-796924 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 11:41:46.555872  589123 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:41:46.582882  589123 machine.go:94] provisionDockerMachine start ...
	I1213 11:41:46.583000  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:46.601258  589123 main.go:143] libmachine: Using SSH client type: native
	I1213 11:41:46.601613  589123 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33430 <nil> <nil>}
	I1213 11:41:46.601628  589123 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:41:46.602167  589123 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43966->127.0.0.1:33430: read: connection reset by peer
	I1213 11:41:49.762525  589123 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-796924
	
	I1213 11:41:49.762610  589123 ubuntu.go:182] provisioning hostname "newest-cni-796924"
	I1213 11:41:49.762751  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:49.782841  589123 main.go:143] libmachine: Using SSH client type: native
	I1213 11:41:49.783298  589123 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33430 <nil> <nil>}
	I1213 11:41:49.783328  589123 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-796924 && echo "newest-cni-796924" | sudo tee /etc/hostname
	I1213 11:41:49.948352  589123 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-796924
	
	I1213 11:41:49.948435  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:49.965977  589123 main.go:143] libmachine: Using SSH client type: native
	I1213 11:41:49.966316  589123 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33430 <nil> <nil>}
	I1213 11:41:49.966341  589123 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-796924' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-796924/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-796924' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:41:50.147128  589123 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:41:50.147171  589123 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-307042/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-307042/.minikube}
	I1213 11:41:50.147219  589123 ubuntu.go:190] setting up certificates
	I1213 11:41:50.147230  589123 provision.go:84] configureAuth start
	I1213 11:41:50.147297  589123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-796924
	I1213 11:41:50.165702  589123 provision.go:143] copyHostCerts
	I1213 11:41:50.165784  589123 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem, removing ...
	I1213 11:41:50.165802  589123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem
	I1213 11:41:50.165914  589123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem (1082 bytes)
	I1213 11:41:50.166068  589123 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem, removing ...
	I1213 11:41:50.166080  589123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem
	I1213 11:41:50.166123  589123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem (1123 bytes)
	I1213 11:41:50.166210  589123 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem, removing ...
	I1213 11:41:50.166226  589123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem
	I1213 11:41:50.166257  589123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem (1675 bytes)
	I1213 11:41:50.166335  589123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem org=jenkins.newest-cni-796924 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-796924]
	I1213 11:41:50.575993  589123 provision.go:177] copyRemoteCerts
	I1213 11:41:50.576089  589123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:41:50.576156  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:50.593521  589123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:41:50.702596  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 11:41:50.720289  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:41:50.738001  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 11:41:50.755303  589123 provision.go:87] duration metric: took 608.049982ms to configureAuth
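The configureAuth phase above (provision.go:84-117) signs a per-machine server certificate against the local minikube CA, with the SANs listed in the log, then copies it into /etc/docker on the node. A minimal sketch for spot-checking that cert from the host, using only stock openssl and the paths already shown in this log:

    openssl verify -CAfile /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem \
      /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem
    # confirm the SAN list matches the san=[...] entry logged by provision.go:117
    openssl x509 -in /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem \
      -noout -text | grep -A1 'Subject Alternative Name'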
	I1213 11:41:50.755333  589123 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:41:50.755533  589123 config.go:182] Loaded profile config "newest-cni-796924": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:41:50.755547  589123 machine.go:97] duration metric: took 4.172642608s to provisionDockerMachine
	I1213 11:41:50.755555  589123 client.go:176] duration metric: took 9.892260099s to LocalClient.Create
	I1213 11:41:50.755575  589123 start.go:167] duration metric: took 9.892327365s to libmachine.API.Create "newest-cni-796924"
	I1213 11:41:50.755586  589123 start.go:293] postStartSetup for "newest-cni-796924" (driver="docker")
	I1213 11:41:50.755596  589123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:41:50.755647  589123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:41:50.755689  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:50.772594  589123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:41:50.878962  589123 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:41:50.882465  589123 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:41:50.882496  589123 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:41:50.882513  589123 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/addons for local assets ...
	I1213 11:41:50.882569  589123 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/files for local assets ...
	I1213 11:41:50.882649  589123 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> 3089152.pem in /etc/ssl/certs
	I1213 11:41:50.882784  589123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:41:50.890136  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 11:41:50.909142  589123 start.go:296] duration metric: took 153.541145ms for postStartSetup
	I1213 11:41:50.909520  589123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-796924
	I1213 11:41:50.926272  589123 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/config.json ...
	I1213 11:41:50.926557  589123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:41:50.926615  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:50.943196  589123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:41:51.043825  589123 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:41:51.048871  589123 start.go:128] duration metric: took 10.189316484s to createHost
	I1213 11:41:51.048902  589123 start.go:83] releasing machines lock for "newest-cni-796924", held for 10.189458492s
	I1213 11:41:51.048990  589123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-796924
	I1213 11:41:51.066001  589123 ssh_runner.go:195] Run: cat /version.json
	I1213 11:41:51.066070  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:51.066359  589123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:41:51.066428  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:51.089473  589123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:41:51.096259  589123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:41:51.199327  589123 ssh_runner.go:195] Run: systemctl --version
	I1213 11:41:51.296961  589123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:41:51.301350  589123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:41:51.301424  589123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:41:51.333583  589123 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 11:41:51.333658  589123 start.go:496] detecting cgroup driver to use...
	I1213 11:41:51.333709  589123 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 11:41:51.333790  589123 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 11:41:51.348970  589123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:41:51.362099  589123 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:41:51.362222  589123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:41:51.379694  589123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:41:51.398786  589123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:41:51.510657  589123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:41:51.629156  589123 docker.go:234] disabling docker service ...
	I1213 11:41:51.629223  589123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:41:51.650731  589123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:41:51.664169  589123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:41:51.793148  589123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:41:51.904796  589123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:41:51.919458  589123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:41:51.942455  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 11:41:51.956281  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 11:41:51.965941  589123 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 11:41:51.966013  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 11:41:51.977493  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:41:51.987404  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 11:41:52.000948  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:41:52.013279  589123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:41:52.023039  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 11:41:52.032853  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 11:41:52.042519  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 11:41:52.052346  589123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:41:52.060125  589123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:41:52.068281  589123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:41:52.179247  589123 ssh_runner.go:195] Run: sudo systemctl restart containerd
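The sed pipeline above rewrites /etc/containerd/config.toml in place: pause image, restrict_oom_score_adj=false, SystemdCgroup=false (matching the detected "cgroupfs" driver), runc v2 runtime, CNI conf_dir, and enable_unprivileged_ports=true. A minimal sketch for verifying those edits survived before trusting the restart; the grep patterns simply mirror the sed targets:

    grep -nE 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir|enable_unprivileged_ports' \
      /etc/containerd/config.toml
    systemctl is-active containerd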
	I1213 11:41:52.320321  589123 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 11:41:52.320429  589123 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 11:41:52.324439  589123 start.go:564] Will wait 60s for crictl version
	I1213 11:41:52.324501  589123 ssh_runner.go:195] Run: which crictl
	I1213 11:41:52.328708  589123 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:41:52.357589  589123 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 11:41:52.357683  589123 ssh_runner.go:195] Run: containerd --version
	I1213 11:41:52.383274  589123 ssh_runner.go:195] Run: containerd --version
	I1213 11:41:52.413360  589123 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 11:41:52.416432  589123 cli_runner.go:164] Run: docker network inspect newest-cni-796924 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:41:52.432557  589123 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 11:41:52.436286  589123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
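The one-liner above is an idempotent /etc/hosts update. Unpacked for readability (same commands; the $'\t' ANSI-C quoting is a literal TAB in the grep pattern):

    # drop any stale line ending in <TAB>host.minikube.internal,
    # append the fresh mapping, then install the result as root
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.76.1\thost.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts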
	I1213 11:41:52.449106  589123 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 11:41:52.452071  589123 kubeadm.go:884] updating cluster {Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:41:52.452217  589123 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 11:41:52.452308  589123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:41:52.477318  589123 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 11:41:52.477343  589123 containerd.go:534] Images already preloaded, skipping extraction
	I1213 11:41:52.477404  589123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:41:52.505926  589123 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 11:41:52.505953  589123 cache_images.go:86] Images are preloaded, skipping loading
	I1213 11:41:52.505961  589123 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 11:41:52.506065  589123 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-796924 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:41:52.506135  589123 ssh_runner.go:195] Run: sudo crictl info
	I1213 11:41:52.531708  589123 cni.go:84] Creating CNI manager for ""
	I1213 11:41:52.531733  589123 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 11:41:52.531753  589123 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 11:41:52.531776  589123 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-796924 NodeName:newest-cni-796924 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:41:52.531907  589123 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-796924"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
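Because kubeadm.go:196 dumps the full generated config, it can be replayed by hand. A hedged sketch: kubeadm v1.26 and newer ship a validator, so the same file can be sanity-checked on the node before init, using the binary path and config path invoked later in this log:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml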
	
	I1213 11:41:52.531983  589123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 11:41:52.540473  589123 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 11:41:52.540571  589123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:41:52.548635  589123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 11:41:52.562445  589123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 11:41:52.579341  589123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1213 11:41:52.593144  589123 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:41:52.596805  589123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:41:52.607006  589123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:41:52.727771  589123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:41:52.751356  589123 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924 for IP: 192.168.76.2
	I1213 11:41:52.751381  589123 certs.go:195] generating shared ca certs ...
	I1213 11:41:52.751399  589123 certs.go:227] acquiring lock for ca certs: {Name:mkca84a85b1b664dba08ef567e84d493256f825e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:52.751547  589123 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key
	I1213 11:41:52.751597  589123 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key
	I1213 11:41:52.751607  589123 certs.go:257] generating profile certs ...
	I1213 11:41:52.751662  589123 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/client.key
	I1213 11:41:52.751679  589123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/client.crt with IP's: []
	I1213 11:41:53.086363  589123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/client.crt ...
	I1213 11:41:53.086398  589123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/client.crt: {Name:mk66b963bdd54f4b935fe2fc7acd97dde553339b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:53.086603  589123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/client.key ...
	I1213 11:41:53.086620  589123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/client.key: {Name:mk98638456845d9072484c2ea9cf4343d6af1634 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:53.086739  589123 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key.ced45374
	I1213 11:41:53.086760  589123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.crt.ced45374 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1213 11:41:53.240504  589123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.crt.ced45374 ...
	I1213 11:41:53.240537  589123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.crt.ced45374: {Name:mkabd19bc7e960d2c555d82ddd752e663c8f6cd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:53.240708  589123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key.ced45374 ...
	I1213 11:41:53.240722  589123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key.ced45374: {Name:mk3e4fdd1c06bfd329cc4a39da890d8da6317b83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:53.240816  589123 certs.go:382] copying /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.crt.ced45374 -> /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.crt
	I1213 11:41:53.240898  589123 certs.go:386] copying /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key.ced45374 -> /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key
	I1213 11:41:53.240954  589123 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.key
	I1213 11:41:53.240973  589123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.crt with IP's: []
	I1213 11:41:53.471880  589123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.crt ...
	I1213 11:41:53.471916  589123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.crt: {Name:mkc7d686a714b0dc00954cf052cbfbc601a1b715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:53.472127  589123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.key ...
	I1213 11:41:53.472146  589123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.key: {Name:mk3b8f645a4e8504ec9bd2eed45071861029af54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:53.472358  589123 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem (1338 bytes)
	W1213 11:41:53.472406  589123 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915_empty.pem, impossibly tiny 0 bytes
	I1213 11:41:53.472425  589123 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:41:53.472456  589123 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:41:53.472484  589123 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:41:53.472514  589123 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem (1675 bytes)
	I1213 11:41:53.472565  589123 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 11:41:53.473197  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:41:53.497619  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:41:53.520852  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:41:53.540142  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:41:53.558660  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 11:41:53.576992  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 11:41:53.595267  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:41:53.613794  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 11:41:53.632148  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /usr/share/ca-certificates/3089152.pem (1708 bytes)
	I1213 11:41:53.650879  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:41:53.676104  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem --> /usr/share/ca-certificates/308915.pem (1338 bytes)
	I1213 11:41:53.697746  589123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:41:53.714070  589123 ssh_runner.go:195] Run: openssl version
	I1213 11:41:53.722093  589123 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3089152.pem
	I1213 11:41:53.730578  589123 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3089152.pem /etc/ssl/certs/3089152.pem
	I1213 11:41:53.738421  589123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3089152.pem
	I1213 11:41:53.742437  589123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:22 /usr/share/ca-certificates/3089152.pem
	I1213 11:41:53.742533  589123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3089152.pem
	I1213 11:41:53.786290  589123 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:41:53.794147  589123 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3089152.pem /etc/ssl/certs/3ec20f2e.0
	I1213 11:41:53.802168  589123 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:41:53.809707  589123 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:41:53.817606  589123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:41:53.821659  589123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:41:53.821728  589123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:41:53.863050  589123 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:41:53.870933  589123 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 11:41:53.878551  589123 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/308915.pem
	I1213 11:41:53.886061  589123 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/308915.pem /etc/ssl/certs/308915.pem
	I1213 11:41:53.893998  589123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/308915.pem
	I1213 11:41:53.898172  589123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:22 /usr/share/ca-certificates/308915.pem
	I1213 11:41:53.898239  589123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308915.pem
	I1213 11:41:53.939253  589123 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 11:41:53.946895  589123 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/308915.pem /etc/ssl/certs/51391683.0
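The test -L / ln -fs sequence above implements OpenSSL's hashed-directory convention: every CA under /etc/ssl/certs must be reachable through a symlink named <subject-hash>.0 (hence b5213941.0 for minikubeCA.pem). A minimal sketch of that convention in isolation:

    # compute the subject hash openssl expects, then create the <hash>.0 link
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"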
	I1213 11:41:53.954761  589123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:41:53.958396  589123 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 11:41:53.958494  589123 kubeadm.go:401] StartCluster: {Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:41:53.958606  589123 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 11:41:53.958783  589123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:41:53.989529  589123 cri.go:89] found id: ""
	I1213 11:41:53.989604  589123 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:41:53.998028  589123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 11:41:54.008661  589123 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:41:54.008741  589123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:41:54.018340  589123 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:41:54.018369  589123 kubeadm.go:158] found existing configuration files:
	
	I1213 11:41:54.018431  589123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:41:54.027307  589123 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:41:54.027393  589123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:41:54.036120  589123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:41:54.044655  589123 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:41:54.044733  589123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:41:54.053302  589123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:41:54.061899  589123 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:41:54.061991  589123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:41:54.070981  589123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:41:54.079553  589123 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:41:54.079622  589123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 11:41:54.087398  589123 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:41:54.133199  589123 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:41:54.133519  589123 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:41:54.230705  589123 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:41:54.230782  589123 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:41:54.230824  589123 kubeadm.go:319] OS: Linux
	I1213 11:41:54.230875  589123 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:41:54.230929  589123 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:41:54.230979  589123 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:41:54.231032  589123 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:41:54.231083  589123 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:41:54.231135  589123 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:41:54.231184  589123 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:41:54.231236  589123 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:41:54.231285  589123 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:41:54.298600  589123 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:41:54.298731  589123 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:41:54.298837  589123 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:41:54.307104  589123 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:41:54.313608  589123 out.go:252]   - Generating certificates and keys ...
	I1213 11:41:54.313778  589123 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:41:54.313890  589123 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:41:54.510481  589123 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 11:41:54.575310  589123 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 11:41:54.686709  589123 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 11:41:54.914237  589123 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 11:41:55.329374  589123 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 11:41:55.329538  589123 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-796924] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 11:41:55.443297  589123 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 11:41:55.443660  589123 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-796924] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 11:41:55.929252  589123 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 11:41:56.099892  589123 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 11:41:56.662486  589123 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 11:41:56.662923  589123 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:41:56.728098  589123 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:41:56.987601  589123 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:41:57.419088  589123 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:41:57.640413  589123 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:41:58.149864  589123 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:41:58.150638  589123 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:41:58.153451  589123 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:41:58.157369  589123 out.go:252]   - Booting up control plane ...
	I1213 11:41:58.157498  589123 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:41:58.157584  589123 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:41:58.157650  589123 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:41:58.194714  589123 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:41:58.194860  589123 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:41:58.202073  589123 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:41:58.202506  589123 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:41:58.202564  589123 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:41:58.355362  589123 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:41:58.355487  589123 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 11:45:58.353148  589123 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000052585s
	I1213 11:45:58.353188  589123 kubeadm.go:319] 
	I1213 11:45:58.353442  589123 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 11:45:58.353506  589123 kubeadm.go:319] 	- The kubelet is not running
	I1213 11:45:58.353695  589123 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 11:45:58.353705  589123 kubeadm.go:319] 
	I1213 11:45:58.354139  589123 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 11:45:58.354199  589123 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 11:45:58.354254  589123 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 11:45:58.354259  589123 kubeadm.go:319] 
	I1213 11:45:58.358851  589123 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 11:45:58.359414  589123 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 11:45:58.359545  589123 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 11:45:58.359830  589123 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 11:45:58.359839  589123 kubeadm.go:319] 
	I1213 11:45:58.359915  589123 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1213 11:45:58.360054  589123 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-796924] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-796924] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000052585s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
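The root failure here is kubeadm's kubelet health probe: nothing ever answered on 127.0.0.1:10248 within the 4m0s window. A hedged sketch of the triage loop the message itself suggests, combining the probe kubeadm issues with the two commands it recommends:

    curl -sS http://127.0.0.1:10248/healthz ; echo
    systemctl status kubelet --no-pager
    journalctl -xeu kubelet --no-pager | tail -n 50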
	
	I1213 11:45:58.360154  589123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 11:45:58.767471  589123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:45:58.780638  589123 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:45:58.780702  589123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:45:58.788623  589123 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:45:58.788646  589123 kubeadm.go:158] found existing configuration files:
	
	I1213 11:45:58.788724  589123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:45:58.796630  589123 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:45:58.796706  589123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:45:58.804119  589123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:45:58.811956  589123 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:45:58.812020  589123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:45:58.819661  589123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:45:58.827110  589123 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:45:58.827171  589123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:45:58.834525  589123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:45:58.842305  589123 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:45:58.842374  589123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
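	The eight commands above are minikube's stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and removed otherwise. A minimal shell sketch of the equivalent check (a hypothetical consolidation; minikube runs each grep and rm as a separate SSH command, as logged above):

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	        # Keep the file only if it points at the expected endpoint; else drop it.
	        sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	            || sudo rm -f "/etc/kubernetes/$f"
	    done

	Here every grep exits with status 2 (file missing), so nothing is left to clean up before the retry of kubeadm init below.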
	I1213 11:45:58.849891  589123 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:45:58.890505  589123 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:45:58.890564  589123 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:45:58.955820  589123 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:45:58.955899  589123 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:45:58.955941  589123 kubeadm.go:319] OS: Linux
	I1213 11:45:58.955989  589123 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:45:58.956040  589123 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:45:58.956091  589123 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:45:58.956143  589123 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:45:58.956193  589123 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:45:58.956250  589123 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:45:58.956298  589123 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:45:58.956350  589123 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:45:58.956399  589123 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:45:59.029638  589123 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:45:59.029754  589123 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:45:59.029851  589123 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:45:59.039109  589123 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:45:59.042552  589123 out.go:252]   - Generating certificates and keys ...
	I1213 11:45:59.042723  589123 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:45:59.042824  589123 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:45:59.042943  589123 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 11:45:59.043039  589123 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 11:45:59.043207  589123 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 11:45:59.043289  589123 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 11:45:59.043376  589123 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 11:45:59.043461  589123 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 11:45:59.043567  589123 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 11:45:59.043667  589123 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 11:45:59.043734  589123 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 11:45:59.043819  589123 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:45:59.264981  589123 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:45:59.845721  589123 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:46:00.029919  589123 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:46:00.271744  589123 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:46:00.538849  589123 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:46:00.539679  589123 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:46:00.542509  589123 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:46:00.546068  589123 out.go:252]   - Booting up control plane ...
	I1213 11:46:00.546182  589123 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:46:00.546263  589123 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:46:00.546330  589123 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:46:00.568499  589123 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:46:00.568665  589123 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:46:00.575924  589123 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:46:00.576291  589123 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:46:00.576363  589123 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:46:00.707953  589123 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:46:00.708079  589123 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 11:50:00.707869  589123 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000374701s
	I1213 11:50:00.707898  589123 kubeadm.go:319] 
	I1213 11:50:00.707956  589123 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 11:50:00.707990  589123 kubeadm.go:319] 	- The kubelet is not running
	I1213 11:50:00.708096  589123 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 11:50:00.708101  589123 kubeadm.go:319] 
	I1213 11:50:00.708207  589123 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 11:50:00.708239  589123 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 11:50:00.708270  589123 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 11:50:00.708274  589123 kubeadm.go:319] 
	I1213 11:50:00.719023  589123 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 11:50:00.719530  589123 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 11:50:00.719698  589123 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 11:50:00.720025  589123 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 11:50:00.720042  589123 kubeadm.go:319] 
	I1213 11:50:00.720173  589123 kubeadm.go:403] duration metric: took 8m6.761683072s to StartCluster
	I1213 11:50:00.720209  589123 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:50:00.720274  589123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:50:00.720362  589123 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 11:50:00.755118  589123 cri.go:89] found id: ""
	I1213 11:50:00.755161  589123 logs.go:282] 0 containers: []
	W1213 11:50:00.755171  589123 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:50:00.755178  589123 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:50:00.755246  589123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:50:00.781097  589123 cri.go:89] found id: ""
	I1213 11:50:00.781120  589123 logs.go:282] 0 containers: []
	W1213 11:50:00.781128  589123 logs.go:284] No container was found matching "etcd"
	I1213 11:50:00.781134  589123 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:50:00.781192  589123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:50:00.806528  589123 cri.go:89] found id: ""
	I1213 11:50:00.806552  589123 logs.go:282] 0 containers: []
	W1213 11:50:00.806559  589123 logs.go:284] No container was found matching "coredns"
	I1213 11:50:00.806566  589123 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:50:00.806623  589123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:50:00.836428  589123 cri.go:89] found id: ""
	I1213 11:50:00.836452  589123 logs.go:282] 0 containers: []
	W1213 11:50:00.836460  589123 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:50:00.836466  589123 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:50:00.836530  589123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:50:00.860830  589123 cri.go:89] found id: ""
	I1213 11:50:00.860898  589123 logs.go:282] 0 containers: []
	W1213 11:50:00.860915  589123 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:50:00.860922  589123 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:50:00.860991  589123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:50:00.886194  589123 cri.go:89] found id: ""
	I1213 11:50:00.886222  589123 logs.go:282] 0 containers: []
	W1213 11:50:00.886230  589123 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:50:00.886237  589123 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:50:00.886298  589123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:50:00.911416  589123 cri.go:89] found id: ""
	I1213 11:50:00.911442  589123 logs.go:282] 0 containers: []
	W1213 11:50:00.911451  589123 logs.go:284] No container was found matching "kindnet"
	I1213 11:50:00.911461  589123 logs.go:123] Gathering logs for dmesg ...
	I1213 11:50:00.911494  589123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:50:00.927545  589123 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:50:00.927575  589123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:50:00.994023  589123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:50:00.985916    4847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:50:00.986512    4847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:50:00.988048    4847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:50:00.988526    4847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:50:00.990075    4847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:50:00.985916    4847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:50:00.986512    4847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:50:00.988048    4847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:50:00.988526    4847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:50:00.990075    4847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
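	With no control-plane containers running, every kubectl call against localhost:8443 is refused, which is why the describe-nodes gathering step above fails. The probes kubeadm recommends can be reproduced by hand inside the node; a minimal sketch using only commands already named in this log:

	    # Kubelet health endpoint polled during the wait-control-plane phase:
	    curl -sS http://127.0.0.1:10248/healthz; echo
	    # Service state and recent kubelet logs, as kubeadm suggests:
	    sudo systemctl status kubelet --no-pager
	    sudo journalctl -xeu kubelet -n 100 --no-pager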
	I1213 11:50:00.994047  589123 logs.go:123] Gathering logs for containerd ...
	I1213 11:50:00.994060  589123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:50:01.033895  589123 logs.go:123] Gathering logs for container status ...
	I1213 11:50:01.033932  589123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:50:01.062457  589123 logs.go:123] Gathering logs for kubelet ...
	I1213 11:50:01.062485  589123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 11:50:01.120952  589123 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000374701s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 11:50:01.121022  589123 out.go:285] * 
	W1213 11:50:01.121080  589123 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000374701s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 11:50:01.121096  589123 out.go:285] * 
	W1213 11:50:01.123307  589123 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 11:50:01.129091  589123 out.go:203] 
	W1213 11:50:01.132826  589123 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000374701s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 11:50:01.132880  589123 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 11:50:01.132907  589123 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 11:50:01.136752  589123 out.go:203] 

** /stderr **
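The SystemVerification warning repeated throughout this log names the relevant kubelet option: on a cgroup v1 host (the CGROUPS_* checks above all report v1 controllers), kubelet v1.35+ refuses to run unless FailCgroupV1 is set to false. A minimal sketch of a KubeletConfiguration fragment carrying that setting as a kubeadm patch; the path and file name follow kubeadm's patch conventions but are assumptions here, and minikube already applies its own kubeletconfiguration patch per the log, so this is illustrative only:

    sudo mkdir -p /etc/kubernetes/patches    # hypothetical location
    sudo tee /etc/kubernetes/patches/kubeletconfiguration+strategic.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # Field named by the preflight warning (assumed camelCase spelling):
    failCgroupV1: false
    EOF

The same warning notes that the validation must also be skipped explicitly; the kubeadm init invocation above already does so by listing SystemVerification in --ignore-preflight-errors.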
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p newest-cni-796924 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
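Per the suggestion emitted at the end of the log, a retry would repeat the failing arguments with the kubelet cgroup driver forced to systemd; a sketch of that invocation (whether it resolves this particular kubelet failure is not established by this run):

    out/minikube-linux-arm64 start -p newest-cni-796924 --memory=3072 \
        --alsologtostderr --wait=apiserver,system_pods,default_sa \
        --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
        --driver=docker --container-runtime=containerd \
        --kubernetes-version=v1.35.0-beta.0 \
        --extra-config=kubelet.cgroup-driver=systemd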
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-796924
helpers_test.go:244: (dbg) docker inspect newest-cni-796924:

-- stdout --
	[
	    {
	        "Id": "27aba94e8ede7488df815559f44c17888a5d8cf06635a1b1d81eb33d3d933273",
	        "Created": "2025-12-13T11:41:45.560617227Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 589565,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T11:41:45.628321439Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/27aba94e8ede7488df815559f44c17888a5d8cf06635a1b1d81eb33d3d933273/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/27aba94e8ede7488df815559f44c17888a5d8cf06635a1b1d81eb33d3d933273/hostname",
	        "HostsPath": "/var/lib/docker/containers/27aba94e8ede7488df815559f44c17888a5d8cf06635a1b1d81eb33d3d933273/hosts",
	        "LogPath": "/var/lib/docker/containers/27aba94e8ede7488df815559f44c17888a5d8cf06635a1b1d81eb33d3d933273/27aba94e8ede7488df815559f44c17888a5d8cf06635a1b1d81eb33d3d933273-json.log",
	        "Name": "/newest-cni-796924",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-796924:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-796924",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "27aba94e8ede7488df815559f44c17888a5d8cf06635a1b1d81eb33d3d933273",
	                "LowerDir": "/var/lib/docker/overlay2/e3273dd39227f26c709bd7f69a32b4360acb8650246414f8098119d5f28e0f70-init/diff:/var/lib/docker/overlay2/5d0ca86320f2c06765995d644a73af30cd6ddb36f5c1a2a6b1ebb24af53cdabd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e3273dd39227f26c709bd7f69a32b4360acb8650246414f8098119d5f28e0f70/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e3273dd39227f26c709bd7f69a32b4360acb8650246414f8098119d5f28e0f70/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e3273dd39227f26c709bd7f69a32b4360acb8650246414f8098119d5f28e0f70/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-796924",
	                "Source": "/var/lib/docker/volumes/newest-cni-796924/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-796924",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-796924",
	                "name.minikube.sigs.k8s.io": "newest-cni-796924",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "92d11ff764680cdd62555d8da891c50ecfe321b3d8620a2e9bb3f0c5bfca4c60",
	            "SandboxKey": "/var/run/docker/netns/92d11ff76468",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-796924": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:c8:11:0f:14:22",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "524b54a7afb58fdfadc2532a94da198ca12aafc23248ec4905999b39dfe064e0",
	                    "EndpointID": "99474f614f6ae76108238f2f77b9e4272618bc5ea1a8c7ccb8cffa8255291355",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-796924",
	                        "27aba94e8ede"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
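
The inspect dump above is the ground truth the test harness keys off: each control port of the kic container (22, 2376, 5000, 8443, 32443) is published on 127.0.0.1 under an ephemeral host port. Below is a minimal Go sketch (illustrative only, not minikube code) of resolving the published SSH port with the same `docker container inspect -f` template this log later runs against no-preload-333352; it assumes the docker CLI is on PATH and the container name is taken from the dump above.

	// portprobe.go — illustrative sketch: resolve the host port Docker
	// published for the container's SSH port (22/tcp), e.g. 33430 above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func hostSSHPort(container string) (string, error) {
		// The same Go template the log later runs against no-preload-333352.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostSSHPort("newest-cni-796924")
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Println("22/tcp published at 127.0.0.1:" + port)
	}
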
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-796924 -n newest-cni-796924
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-796924 -n newest-cni-796924: exit status 6 (350.979669ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 11:50:01.564041  601469 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-796924" does not appear in /home/jenkins/minikube-integration/22127-307042/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
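
Exit status 6 is minikube's "kubeconfig misconfigured" status: the container itself is Running, but the profile has no endpoint entry in the kubeconfig the test points at, which is what status.go:458 reports above. A minimal sketch of an equivalent standalone check, assuming k8s.io/client-go is on the module path (illustrative, not the status.go implementation; path and profile name are taken from the log):

	// kubecheck.go — illustrative: verify a profile name appears in a
	// kubeconfig, mirroring the lookup that fails above.
	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig := "/home/jenkins/minikube-integration/22127-307042/kubeconfig"
		cfg, err := clientcmd.LoadFromFile(kubeconfig)
		if err != nil {
			fmt.Println("load kubeconfig:", err)
			return
		}
		if _, ok := cfg.Clusters["newest-cni-796924"]; !ok {
			// Matches the stderr above; `minikube update-context` rewrites the entry.
			fmt.Printf("%q does not appear in %s\n", "newest-cni-796924", kubeconfig)
		}
	}
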
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-796924 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-951675 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:38 UTC │ 13 Dec 25 11:38 UTC │
	│ stop    │ -p embed-certs-951675 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:38 UTC │ 13 Dec 25 11:38 UTC │
	│ addons  │ enable dashboard -p embed-certs-951675 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:38 UTC │ 13 Dec 25 11:38 UTC │
	│ start   │ -p embed-certs-951675 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:38 UTC │ 13 Dec 25 11:39 UTC │
	│ image   │ embed-certs-951675 image list --format=json                                                                                                                                                                                                                │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ pause   │ -p embed-certs-951675 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ unpause │ -p embed-certs-951675 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ delete  │ -p embed-certs-951675                                                                                                                                                                                                                                      │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ delete  │ -p embed-certs-951675                                                                                                                                                                                                                                      │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ delete  │ -p disable-driver-mounts-823668                                                                                                                                                                                                                            │ disable-driver-mounts-823668 │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ start   │ -p default-k8s-diff-port-191845 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:40 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-191845 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:40 UTC │
	│ stop    │ -p default-k8s-diff-port-191845 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:40 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-191845 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:40 UTC │
	│ start   │ -p default-k8s-diff-port-191845 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:41 UTC │
	│ image   │ default-k8s-diff-port-191845 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ pause   │ -p default-k8s-diff-port-191845 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ unpause │ -p default-k8s-diff-port-191845 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ delete  │ -p default-k8s-diff-port-191845                                                                                                                                                                                                                            │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ delete  │ -p default-k8s-diff-port-191845                                                                                                                                                                                                                            │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ start   │ -p newest-cni-796924 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-796924            │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-333352 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-333352            │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ stop    │ -p no-preload-333352 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-333352            │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │ 13 Dec 25 11:46 UTC │
	│ addons  │ enable dashboard -p no-preload-333352 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-333352            │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │ 13 Dec 25 11:46 UTC │
	│ start   │ -p no-preload-333352 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-333352            │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 11:46:47
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 11:46:47.931970  596998 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:46:47.932200  596998 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:46:47.932224  596998 out.go:374] Setting ErrFile to fd 2...
	I1213 11:46:47.932243  596998 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:46:47.932512  596998 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 11:46:47.932893  596998 out.go:368] Setting JSON to false
	I1213 11:46:47.933847  596998 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":16161,"bootTime":1765610247,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 11:46:47.933941  596998 start.go:143] virtualization:  
	I1213 11:46:47.936853  596998 out.go:179] * [no-preload-333352] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:46:47.940791  596998 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:46:47.940972  596998 notify.go:221] Checking for updates...
	I1213 11:46:47.944715  596998 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:46:47.948724  596998 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 11:46:47.952032  596998 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 11:46:47.955654  596998 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:46:47.958860  596998 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:46:47.962467  596998 config.go:182] Loaded profile config "no-preload-333352": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:46:47.963200  596998 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:46:47.998152  596998 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:46:47.998290  596998 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:46:48.062434  596998 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:46:48.052493365 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:46:48.062546  596998 docker.go:319] overlay module found
	I1213 11:46:48.065709  596998 out.go:179] * Using the docker driver based on existing profile
	I1213 11:46:48.068574  596998 start.go:309] selected driver: docker
	I1213 11:46:48.068598  596998 start.go:927] validating driver "docker" against &{Name:no-preload-333352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-333352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:46:48.068700  596998 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:46:48.069441  596998 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:46:48.125553  596998 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:46:48.115368398 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:46:48.125930  596998 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 11:46:48.125957  596998 cni.go:84] Creating CNI manager for ""
	I1213 11:46:48.126004  596998 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 11:46:48.126038  596998 start.go:353] cluster config:
	{Name:no-preload-333352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-333352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:46:48.129462  596998 out.go:179] * Starting "no-preload-333352" primary control-plane node in "no-preload-333352" cluster
	I1213 11:46:48.132280  596998 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 11:46:48.135249  596998 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:46:48.138117  596998 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 11:46:48.138171  596998 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:46:48.138307  596998 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/config.json ...
	I1213 11:46:48.138613  596998 cache.go:107] acquiring lock: {Name:mk31a59cdc41332147a99da115e762325d4c0338 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:46:48.138751  596998 cache.go:115] /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1213 11:46:48.138763  596998 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 161.618µs
	I1213 11:46:48.138777  596998 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1213 11:46:48.138765  596998 cache.go:107] acquiring lock: {Name:mk2ae32cc20ed4877d34af62f362936effddd88e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:46:48.138790  596998 cache.go:107] acquiring lock: {Name:mkc81502ef492ecd96689a43cd1ba75bb4269f1e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:46:48.138812  596998 cache.go:107] acquiring lock: {Name:mk8c5f5248a840d1f1002cf2ef82275f7d10aa22 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:46:48.138842  596998 cache.go:115] /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1213 11:46:48.138848  596998 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 60.267µs
	I1213 11:46:48.138854  596998 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1213 11:46:48.138851  596998 cache.go:107] acquiring lock: {Name:mk35ccdf3fe56b66e694c71ff2d919f143d8dacc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:46:48.138862  596998 cache.go:107] acquiring lock: {Name:mk23fe723c287cca56429f89071149f1d96bb4dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:46:48.138892  596998 cache.go:115] /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1213 11:46:48.138901  596998 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 51.398µs
	I1213 11:46:48.138905  596998 cache.go:115] /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1213 11:46:48.138908  596998 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1213 11:46:48.138894  596998 cache.go:115] /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1213 11:46:48.138912  596998 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 101.006µs
	I1213 11:46:48.138918  596998 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1213 11:46:48.138918  596998 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 56.075µs
	I1213 11:46:48.138924  596998 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1213 11:46:48.138940  596998 cache.go:115] /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1213 11:46:48.138947  596998 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 191.059µs
	I1213 11:46:48.138952  596998 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1213 11:46:48.138934  596998 cache.go:107] acquiring lock: {Name:mkc6bf22ce18468a92a774694a4b49cbc277f1ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:46:48.138948  596998 cache.go:107] acquiring lock: {Name:mk26d49691f1ca365a0728b2ae008656f80369ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:46:48.138975  596998 cache.go:115] /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1213 11:46:48.138980  596998 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 47.803µs
	I1213 11:46:48.138985  596998 cache.go:115] /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1213 11:46:48.138986  596998 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1213 11:46:48.138992  596998 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 44.924µs
	I1213 11:46:48.138999  596998 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1213 11:46:48.139013  596998 cache.go:87] Successfully saved all images to host disk.
	I1213 11:46:48.157619  596998 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:46:48.157642  596998 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:46:48.157658  596998 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:46:48.157688  596998 start.go:360] acquireMachinesLock for no-preload-333352: {Name:mkcf6f110441e125d79b38a8f8cc1a9606a821b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:46:48.157750  596998 start.go:364] duration metric: took 36.333µs to acquireMachinesLock for "no-preload-333352"
	I1213 11:46:48.157773  596998 start.go:96] Skipping create...Using existing machine configuration
	I1213 11:46:48.157778  596998 fix.go:54] fixHost starting: 
	I1213 11:46:48.158031  596998 cli_runner.go:164] Run: docker container inspect no-preload-333352 --format={{.State.Status}}
	I1213 11:46:48.175033  596998 fix.go:112] recreateIfNeeded on no-preload-333352: state=Stopped err=<nil>
	W1213 11:46:48.175073  596998 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 11:46:48.180317  596998 out.go:252] * Restarting existing docker container for "no-preload-333352" ...
	I1213 11:46:48.180439  596998 cli_runner.go:164] Run: docker start no-preload-333352
	I1213 11:46:48.429680  596998 cli_runner.go:164] Run: docker container inspect no-preload-333352 --format={{.State.Status}}
	I1213 11:46:48.453033  596998 kic.go:430] container "no-preload-333352" state is running.
	I1213 11:46:48.453454  596998 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-333352
	I1213 11:46:48.479808  596998 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/config.json ...
	I1213 11:46:48.480042  596998 machine.go:94] provisionDockerMachine start ...
	I1213 11:46:48.480102  596998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:46:48.503420  596998 main.go:143] libmachine: Using SSH client type: native
	I1213 11:46:48.503750  596998 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1213 11:46:48.503759  596998 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:46:48.504579  596998 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 11:46:51.658471  596998 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-333352
	
	I1213 11:46:51.658499  596998 ubuntu.go:182] provisioning hostname "no-preload-333352"
	I1213 11:46:51.658568  596998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:46:51.680359  596998 main.go:143] libmachine: Using SSH client type: native
	I1213 11:46:51.680665  596998 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1213 11:46:51.680681  596998 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-333352 && echo "no-preload-333352" | sudo tee /etc/hostname
	I1213 11:46:51.840345  596998 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-333352
	
	I1213 11:46:51.840432  596998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:46:51.858862  596998 main.go:143] libmachine: Using SSH client type: native
	I1213 11:46:51.859190  596998 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1213 11:46:51.859212  596998 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-333352' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-333352/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-333352' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:46:52.011439  596998 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:46:52.011473  596998 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-307042/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-307042/.minikube}
	I1213 11:46:52.011517  596998 ubuntu.go:190] setting up certificates
	I1213 11:46:52.011538  596998 provision.go:84] configureAuth start
	I1213 11:46:52.011606  596998 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-333352
	I1213 11:46:52.030543  596998 provision.go:143] copyHostCerts
	I1213 11:46:52.030630  596998 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem, removing ...
	I1213 11:46:52.030645  596998 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem
	I1213 11:46:52.030900  596998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem (1082 bytes)
	I1213 11:46:52.031021  596998 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem, removing ...
	I1213 11:46:52.031034  596998 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem
	I1213 11:46:52.031064  596998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem (1123 bytes)
	I1213 11:46:52.031134  596998 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem, removing ...
	I1213 11:46:52.031144  596998 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem
	I1213 11:46:52.031169  596998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem (1675 bytes)
	I1213 11:46:52.031226  596998 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem org=jenkins.no-preload-333352 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-333352]
	I1213 11:46:52.199052  596998 provision.go:177] copyRemoteCerts
	I1213 11:46:52.199122  596998 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:46:52.199163  596998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:46:52.218347  596998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/no-preload-333352/id_rsa Username:docker}
	I1213 11:46:52.322404  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:46:52.340755  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 11:46:52.358393  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 11:46:52.375864  596998 provision.go:87] duration metric: took 364.299362ms to configureAuth
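	
	configureAuth above regenerates the machine's server certificate with the SANs listed at 11:46:52.031226 (127.0.0.1, 192.168.85.2, localhost, minikube, no-preload-333352). An illustrative standard-library sketch of that step follows (self-signed for brevity; the real flow signs with the ca.pem/ca-key.pem pair copied above, and this is not minikube's implementation):
	
	// certsketch.go — illustrative only: mint a server cert whose SANs
	// match the san=[...] list in the provision log.
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)
	
	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			fmt.Println("keygen:", err)
			return
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-333352"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
			DNSNames:     []string{"localhost", "minikube", "no-preload-333352"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		// Self-signed here; the real flow signs with the minikube CA instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			fmt.Println("create cert:", err)
			return
		}
		fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
	}
	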
	I1213 11:46:52.375890  596998 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:46:52.376105  596998 config.go:182] Loaded profile config "no-preload-333352": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:46:52.376112  596998 machine.go:97] duration metric: took 3.896062654s to provisionDockerMachine
	I1213 11:46:52.376121  596998 start.go:293] postStartSetup for "no-preload-333352" (driver="docker")
	I1213 11:46:52.376132  596998 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:46:52.376180  596998 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:46:52.376225  596998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:46:52.393759  596998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/no-preload-333352/id_rsa Username:docker}
	I1213 11:46:52.503058  596998 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:46:52.506632  596998 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:46:52.506662  596998 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:46:52.506674  596998 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/addons for local assets ...
	I1213 11:46:52.506753  596998 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/files for local assets ...
	I1213 11:46:52.506839  596998 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> 3089152.pem in /etc/ssl/certs
	I1213 11:46:52.506949  596998 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:46:52.514878  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 11:46:52.533605  596998 start.go:296] duration metric: took 157.452449ms for postStartSetup
	I1213 11:46:52.533696  596998 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:46:52.533746  596998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:46:52.551775  596998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/no-preload-333352/id_rsa Username:docker}
	I1213 11:46:52.655971  596998 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:46:52.661022  596998 fix.go:56] duration metric: took 4.503236152s for fixHost
	I1213 11:46:52.661051  596998 start.go:83] releasing machines lock for "no-preload-333352", held for 4.503288469s
	I1213 11:46:52.661123  596998 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-333352
	I1213 11:46:52.678128  596998 ssh_runner.go:195] Run: cat /version.json
	I1213 11:46:52.678192  596998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:46:52.678486  596998 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:46:52.678544  596998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:46:52.698809  596998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/no-preload-333352/id_rsa Username:docker}
	I1213 11:46:52.701663  596998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/no-preload-333352/id_rsa Username:docker}
	I1213 11:46:52.895685  596998 ssh_runner.go:195] Run: systemctl --version
	I1213 11:46:52.902479  596998 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:46:52.907001  596998 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:46:52.907123  596998 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:46:52.915282  596998 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 11:46:52.915312  596998 start.go:496] detecting cgroup driver to use...
	I1213 11:46:52.915343  596998 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 11:46:52.915421  596998 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 11:46:52.933908  596998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:46:52.947931  596998 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:46:52.947999  596998 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:46:52.963993  596998 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:46:52.977424  596998 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:46:53.103160  596998 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:46:53.238170  596998 docker.go:234] disabling docker service ...
	I1213 11:46:53.238265  596998 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:46:53.257118  596998 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:46:53.272790  596998 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:46:53.410295  596998 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:46:53.530871  596998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:46:53.544130  596998 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:46:53.559695  596998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 11:46:53.568863  596998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 11:46:53.578325  596998 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 11:46:53.578399  596998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 11:46:53.588010  596998 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:46:53.597447  596998 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 11:46:53.606673  596998 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:46:53.616093  596998 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:46:53.624546  596998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 11:46:53.633591  596998 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 11:46:53.642957  596998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 11:46:53.652128  596998 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:46:53.659821  596998 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:46:53.667713  596998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:46:53.790713  596998 ssh_runner.go:195] Run: sudo systemctl restart containerd
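	
	The sequence above rewrites /etc/containerd/config.toml in place with sed; the decisive edit for this cgroupfs host is forcing SystemdCgroup = false before restarting containerd. A minimal Go sketch of that one rewrite (a hypothetical helper, not minikube code, using the same regex as the sed invocation at 11:46:53.578):
	
	// cgroupfix.go — illustrative: the Go equivalent of
	//   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	// applied to a config.toml fragment.
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	func main() {
		conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  SystemdCgroup = true
	`
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		fmt.Print(re.ReplaceAllString(conf, `${1}SystemdCgroup = false`))
	}
	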
	I1213 11:46:53.892894  596998 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 11:46:53.893007  596998 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 11:46:53.896921  596998 start.go:564] Will wait 60s for crictl version
	I1213 11:46:53.897007  596998 ssh_runner.go:195] Run: which crictl
	I1213 11:46:53.900594  596998 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:46:53.944666  596998 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 11:46:53.944790  596998 ssh_runner.go:195] Run: containerd --version
	I1213 11:46:53.967810  596998 ssh_runner.go:195] Run: containerd --version
	I1213 11:46:53.993455  596998 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 11:46:53.996395  596998 cli_runner.go:164] Run: docker network inspect no-preload-333352 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:46:54.023910  596998 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 11:46:54.028455  596998 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:46:54.039026  596998 kubeadm.go:884] updating cluster {Name:no-preload-333352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-333352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:46:54.039148  596998 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 11:46:54.039201  596998 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:46:54.065782  596998 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 11:46:54.065805  596998 cache_images.go:86] Images are preloaded, skipping loading
	I1213 11:46:54.065813  596998 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 11:46:54.065928  596998 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-333352 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-333352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
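Note: the empty ExecStart= line in the drop-in above is deliberate. For a non-oneshot service, systemd rejects a second ExecStart unless the inherited one is cleared first, so the blank assignment resets kubelet.service's ExecStart before the minikube-specific command line replaces it. The merged result can be inspected on the node with:

    systemctl cat kubelet    # base unit plus every drop-in, in application order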
	I1213 11:46:54.066000  596998 ssh_runner.go:195] Run: sudo crictl info
	I1213 11:46:54.093275  596998 cni.go:84] Creating CNI manager for ""
	I1213 11:46:54.093302  596998 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 11:46:54.093325  596998 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 11:46:54.093349  596998 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-333352 NodeName:no-preload-333352 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:46:54.093537  596998 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-333352"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 11:46:54.093645  596998 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 11:46:54.101713  596998 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 11:46:54.101784  596998 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:46:54.109422  596998 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 11:46:54.122555  596998 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 11:46:54.135656  596998 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
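Note: the 2237-byte file shipped here is the multi-document config rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration in one YAML stream). Assuming a kubeadm new enough to carry the subcommand (v1.26+) at the usual minikube binaries path, it could be sanity-checked offline; this is an illustration, not something the test run does:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new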
	I1213 11:46:54.148334  596998 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:46:54.151958  596998 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:46:54.162210  596998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:46:54.287595  596998 ssh_runner.go:195] Run: sudo systemctl start kubelet
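Note: daemon-reload precedes the start because the kubelet unit and its 10-kubeadm.conf drop-in were just rewritten over scp, and systemd only re-reads unit definitions on reload. Done by hand, the sequence is:

    sudo systemctl daemon-reload
    sudo systemctl start kubelet
    systemctl is-active kubelet    # expect "active" once the unit is up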
	I1213 11:46:54.305395  596998 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352 for IP: 192.168.85.2
	I1213 11:46:54.305417  596998 certs.go:195] generating shared ca certs ...
	I1213 11:46:54.305434  596998 certs.go:227] acquiring lock for ca certs: {Name:mkca84a85b1b664dba08ef567e84d493256f825e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:46:54.305583  596998 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key
	I1213 11:46:54.305641  596998 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key
	I1213 11:46:54.305653  596998 certs.go:257] generating profile certs ...
	I1213 11:46:54.305755  596998 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/client.key
	I1213 11:46:54.305817  596998 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/apiserver.key.cd574fc3
	I1213 11:46:54.305860  596998 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/proxy-client.key
	I1213 11:46:54.305974  596998 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem (1338 bytes)
	W1213 11:46:54.306019  596998 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915_empty.pem, impossibly tiny 0 bytes
	I1213 11:46:54.306031  596998 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:46:54.306061  596998 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:46:54.306090  596998 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:46:54.306117  596998 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem (1675 bytes)
	I1213 11:46:54.306193  596998 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 11:46:54.306893  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:46:54.331092  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:46:54.350803  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:46:54.368808  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:46:54.387679  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 11:46:54.404957  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 11:46:54.422566  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:46:54.440705  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 11:46:54.458444  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem --> /usr/share/ca-certificates/308915.pem (1338 bytes)
	I1213 11:46:54.476249  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /usr/share/ca-certificates/3089152.pem (1708 bytes)
	I1213 11:46:54.494025  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:46:54.512671  596998 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:46:54.526330  596998 ssh_runner.go:195] Run: openssl version
	I1213 11:46:54.532951  596998 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3089152.pem
	I1213 11:46:54.540955  596998 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3089152.pem /etc/ssl/certs/3089152.pem
	I1213 11:46:54.548967  596998 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3089152.pem
	I1213 11:46:54.552993  596998 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:22 /usr/share/ca-certificates/3089152.pem
	I1213 11:46:54.553060  596998 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3089152.pem
	I1213 11:46:54.596516  596998 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:46:54.604001  596998 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:46:54.611355  596998 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:46:54.618889  596998 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:46:54.622912  596998 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:46:54.623031  596998 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:46:54.665964  596998 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:46:54.674514  596998 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/308915.pem
	I1213 11:46:54.683052  596998 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/308915.pem /etc/ssl/certs/308915.pem
	I1213 11:46:54.691830  596998 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/308915.pem
	I1213 11:46:54.696558  596998 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:22 /usr/share/ca-certificates/308915.pem
	I1213 11:46:54.696685  596998 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308915.pem
	I1213 11:46:54.739286  596998 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
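Note: each of the three certificate blocks above follows the same c_rehash-style trust install: copy the PEM under /usr/share/ca-certificates, then symlink it into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0 for minikubeCA here), which is the filename OpenSSL looks up during chain verification. One round, as a sketch:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"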
	I1213 11:46:54.747030  596998 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:46:54.751301  596998 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 11:46:54.792521  596998 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 11:46:54.848244  596998 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 11:46:54.897199  596998 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 11:46:54.938465  596998 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 11:46:54.979853  596998 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
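Note: the six -checkend probes ask openssl whether each control-plane certificate remains valid for another 86400 seconds (24 h); exit status 0 means it will not expire inside the window, so the restart can reuse it instead of regenerating. In isolation:

    openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/peer.crt \
      && echo "valid for >= 24h" || echo "expires within 24h"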
	I1213 11:46:55.021716  596998 kubeadm.go:401] StartCluster: {Name:no-preload-333352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-333352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:46:55.021819  596998 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 11:46:55.021905  596998 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:46:55.050987  596998 cri.go:89] found id: ""
	I1213 11:46:55.051064  596998 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:46:55.059300  596998 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 11:46:55.059321  596998 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 11:46:55.059393  596998 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 11:46:55.066981  596998 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 11:46:55.067384  596998 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-333352" does not appear in /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 11:46:55.067494  596998 kubeconfig.go:62] /home/jenkins/minikube-integration/22127-307042/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-333352" cluster setting kubeconfig missing "no-preload-333352" context setting]
	I1213 11:46:55.067794  596998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/kubeconfig: {Name:mk9039d1d506122ff52224469c478e739c5ebabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:46:55.069069  596998 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 11:46:55.083063  596998 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1213 11:46:55.083096  596998 kubeadm.go:602] duration metric: took 23.769764ms to restartPrimaryControlPlane
	I1213 11:46:55.083110  596998 kubeadm.go:403] duration metric: took 61.40393ms to StartCluster
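Note: the fast restart path hinges on the diff at 11:46:55.069069: because the freshly rendered kubeadm.yaml.new matches the kubeadm.yaml already on the node, restartPrimaryControlPlane concludes the cluster "does not require reconfiguration" and returns in ~24 ms instead of re-running kubeadm. The same check by hand:

    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
      && echo "no reconfiguration needed" || echo "config drift: kubeadm would be re-run"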
	I1213 11:46:55.083126  596998 settings.go:142] acquiring lock: {Name:mk079e9a25ebbc2c8fbae42d4c6ed096a652c00b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:46:55.083190  596998 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 11:46:55.083859  596998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/kubeconfig: {Name:mk9039d1d506122ff52224469c478e739c5ebabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:46:55.084085  596998 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 11:46:55.084484  596998 config.go:182] Loaded profile config "no-preload-333352": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:46:55.084498  596998 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 11:46:55.084648  596998 addons.go:70] Setting storage-provisioner=true in profile "no-preload-333352"
	I1213 11:46:55.084663  596998 addons.go:239] Setting addon storage-provisioner=true in "no-preload-333352"
	I1213 11:46:55.084673  596998 addons.go:70] Setting dashboard=true in profile "no-preload-333352"
	I1213 11:46:55.084687  596998 addons.go:239] Setting addon dashboard=true in "no-preload-333352"
	W1213 11:46:55.084692  596998 addons.go:248] addon dashboard should already be in state true
	I1213 11:46:55.084699  596998 addons.go:70] Setting default-storageclass=true in profile "no-preload-333352"
	I1213 11:46:55.084713  596998 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-333352"
	I1213 11:46:55.084715  596998 host.go:66] Checking if "no-preload-333352" exists ...
	I1213 11:46:55.085024  596998 cli_runner.go:164] Run: docker container inspect no-preload-333352 --format={{.State.Status}}
	I1213 11:46:55.085259  596998 cli_runner.go:164] Run: docker container inspect no-preload-333352 --format={{.State.Status}}
	I1213 11:46:55.084693  596998 host.go:66] Checking if "no-preload-333352" exists ...
	I1213 11:46:55.086123  596998 cli_runner.go:164] Run: docker container inspect no-preload-333352 --format={{.State.Status}}
	I1213 11:46:55.089958  596998 out.go:179] * Verifying Kubernetes components...
	I1213 11:46:55.092885  596998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:46:55.118731  596998 addons.go:239] Setting addon default-storageclass=true in "no-preload-333352"
	I1213 11:46:55.118772  596998 host.go:66] Checking if "no-preload-333352" exists ...
	I1213 11:46:55.119210  596998 cli_runner.go:164] Run: docker container inspect no-preload-333352 --format={{.State.Status}}
	I1213 11:46:55.142748  596998 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:46:55.148113  596998 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 11:46:55.148237  596998 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:46:55.148248  596998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 11:46:55.148312  596998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:46:55.153556  596998 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 11:46:55.156401  596998 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 11:46:55.156436  596998 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 11:46:55.156518  596998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:46:55.170900  596998 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 11:46:55.170922  596998 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 11:46:55.170990  596998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:46:55.202915  596998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/no-preload-333352/id_rsa Username:docker}
	I1213 11:46:55.221059  596998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/no-preload-333352/id_rsa Username:docker}
	I1213 11:46:55.234212  596998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/no-preload-333352/id_rsa Username:docker}
	I1213 11:46:55.322944  596998 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:46:55.404599  596998 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 11:46:55.404621  596998 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 11:46:55.410339  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:46:55.424868  596998 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 11:46:55.424934  596998 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 11:46:55.437611  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 11:46:55.466532  596998 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 11:46:55.466598  596998 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 11:46:55.533480  596998 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 11:46:55.533543  596998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 11:46:55.558338  596998 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 11:46:55.558404  596998 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 11:46:55.573707  596998 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 11:46:55.573775  596998 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 11:46:55.586950  596998 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 11:46:55.587019  596998 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 11:46:55.599876  596998 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 11:46:55.599941  596998 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 11:46:55.613189  596998 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 11:46:55.613214  596998 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 11:46:55.626391  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 11:46:56.170314  596998 node_ready.go:35] waiting up to 6m0s for node "no-preload-333352" to be "Ready" ...
	W1213 11:46:56.170674  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:56.170727  596998 retry.go:31] will retry after 321.378191ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:46:56.170779  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:56.170796  596998 retry.go:31] will retry after 211.981666ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:46:56.170985  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:56.171002  596998 retry.go:31] will retry after 239.070892ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:56.383589  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 11:46:56.411068  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:46:56.469548  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:56.469643  596998 retry.go:31] will retry after 394.603627ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:46:56.477518  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:56.477558  596998 retry.go:31] will retry after 498.653036ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:56.492479  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:46:56.550411  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:56.550465  596998 retry.go:31] will retry after 487.503108ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:56.865341  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:46:56.967936  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:56.968025  596998 retry.go:31] will retry after 717.718245ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:56.977052  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 11:46:57.038612  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:46:57.046035  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:57.046077  596998 retry.go:31] will retry after 431.172191ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1213 11:46:57.103477  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:57.103510  596998 retry.go:31] will retry after 495.110582ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1213 11:46:57.477604  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:46:57.542568  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:57.542608  596998 retry.go:31] will retry after 1.264774015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1213 11:46:57.599678  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:46:57.658440  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:57.658483  596998 retry.go:31] will retry after 976.781113ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1213 11:46:57.686351  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:46:57.782906  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:57.782941  596998 retry.go:31] will retry after 1.210299273s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1213 11:46:58.170918  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
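The failure mode in this stretch is uniform: each kubectl apply aborts during client-side schema validation because kubectl cannot fetch /openapi/v2 from the apiserver on localhost:8443 (connection refused), and the node Ready poll against 192.168.85.2:8443 is refused for the same reason. The apiserver is not listening at all, so the suggested --validate=false would merely skip validation; the apply itself would still fail to connect. The retry.go entries come from a retry-with-backoff loop around each apply; a minimal Go sketch of that pattern (hypothetical names and delays, not minikube's actual code):

    // Sketch of a jittered-backoff retry around "kubectl apply", using
    // hypothetical helper names; it mirrors the "will retry after ..." lines.
    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // applyWithRetry re-runs kubectl apply until it succeeds or the attempt
    // budget runs out, sleeping a growing, jittered delay between attempts.
    func applyWithRetry(kubectl string, manifests []string, attempts int) error {
        backoff := 400 * time.Millisecond
        args := append([]string{"apply", "--force"}, flagEach("-f", manifests)...)
        var err error
        for i := 0; i < attempts; i++ {
            if err = exec.Command(kubectl, args...).Run(); err == nil {
                return nil
            }
            // Jitter so parallel appliers (dashboard, storage-provisioner,
            // storageclass) do not retry in lockstep.
            delay := backoff + time.Duration(rand.Int63n(int64(backoff)))
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
            backoff *= 2
        }
        return err
    }

    // flagEach interleaves a flag before every value: -f a -f b ...
    func flagEach(flag string, vals []string) []string {
        out := make([]string, 0, 2*len(vals))
        for _, v := range vals {
            out = append(out, flag, v)
        }
        return out
    }

    func main() {
        _ = applyWithRetry("kubectl",
            []string{"/etc/kubernetes/addons/storage-provisioner.yaml"}, 5)
    }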
	I1213 11:46:58.635525  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:46:58.695473  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:58.695513  596998 retry.go:31] will retry after 770.527982ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1213 11:46:58.807674  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:46:58.874925  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:58.874962  596998 retry.go:31] will retry after 1.331403387s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1213 11:46:58.994063  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:46:59.058328  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:59.058362  596998 retry.go:31] will retry after 1.540138362s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1213 11:46:59.466331  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:46:59.526972  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:59.527006  596998 retry.go:31] will retry after 1.010658159s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1213 11:47:00.171512  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:47:00.206721  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:47:00.355103  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:47:00.355138  596998 retry.go:31] will retry after 2.476956922s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1213 11:47:00.538651  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:47:00.599510  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:47:00.607813  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:47:00.607842  596998 retry.go:31] will retry after 2.846567669s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1213 11:47:00.671803  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:47:00.671834  596998 retry.go:31] will retry after 1.147758556s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1213 11:47:01.820380  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:47:01.879212  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:47:01.879244  596998 retry.go:31] will retry after 3.144985192s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1213 11:47:02.670957  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
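For reference, the node_ready.go poll above boils down to fetching the node and checking its Ready condition every couple of seconds until a deadline, tolerating transient GET failures like the connection refusals seen here. An illustrative client-go sketch (node name and kubeconfig path from the log; the timeout and function names are assumptions):

    // Poll a node's Ready condition, retrying through apiserver outages.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                // While the apiserver is down the GET itself fails, as in
                // the log above; just wait and retry.
                fmt.Printf("error getting node %q (will retry): %v\n", name, err)
            } else {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("node %q not Ready within %v", name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitNodeReady(cs, "no-preload-333352", 5*time.Minute); err != nil {
            fmt.Println(err)
        }
    }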
	I1213 11:47:02.832252  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:47:02.902734  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:47:02.902771  596998 retry.go:31] will retry after 3.378828885s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1213 11:47:03.455263  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:47:03.521452  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:47:03.521486  596998 retry.go:31] will retry after 3.23032482s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1213 11:47:05.024515  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:47:05.083539  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:47:05.083572  596998 retry.go:31] will retry after 3.91018085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1213 11:47:05.171119  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:47:06.282348  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:47:06.342380  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:47:06.342417  596998 retry.go:31] will retry after 4.569051902s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1213 11:47:06.752192  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:47:06.812324  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:47:06.812362  596998 retry.go:31] will retry after 3.621339093s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1213 11:47:07.171170  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:47:08.994724  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:47:09.059715  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:47:09.059750  596998 retry.go:31] will retry after 3.336187079s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1213 11:47:09.171521  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:47:10.434821  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:47:10.527681  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1213 11:47:10.527715  596998 retry.go:31] will retry after 8.747216293s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1213 11:47:10.911760  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:47:10.973491  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1213 11:47:10.973530  596998 retry.go:31] will retry after 6.563764078s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1213 11:47:11.671509  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:47:12.396136  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:47:12.451525  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1213 11:47:12.451555  596998 retry.go:31] will retry after 12.979902201s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1213 11:47:13.671774  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:47:16.171040  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:47:17.537629  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:47:17.605650  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1213 11:47:17.605686  596998 retry.go:31] will retry after 13.028008559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1213 11:47:18.171361  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:47:19.275997  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:47:19.342259  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1213 11:47:19.342290  596998 retry.go:31] will retry after 20.165472284s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1213 11:47:20.671224  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:47:23.171107  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:47:25.171144  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:47:25.431592  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:47:25.517211  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1213 11:47:25.517246  596998 retry.go:31] will retry after 17.190857405s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1213 11:47:27.671038  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:47:29.671905  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:47:30.634538  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:47:30.747730  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1213 11:47:30.747766  596998 retry.go:31] will retry after 8.253172442s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1213 11:47:32.170901  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:47:34.170950  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:47:36.171702  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:47:38.671029  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
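
The interleaved node_ready warnings come from a separate wait loop that keeps polling the node's Ready condition against the same unreachable endpoint, roughly every two seconds here. A small client-go sketch of such a poll follows, reusing the kubeconfig path and node name the log shows; this illustrates the polling pattern only and is not minikube's node_ready.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node's Ready condition until it is True or the
// context expires, logging transient errors the way the log above does.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			// e.g. "connect: connection refused" while the API server is down.
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "no-preload-333352"); err != nil {
		fmt.Println("node never became Ready:", err)
	}
}
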
	I1213 11:47:39.001281  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:47:39.065716  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1213 11:47:39.065747  596998 retry.go:31] will retry after 30.140073357s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1213 11:47:39.508018  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:47:39.565709  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1213 11:47:39.565750  596998 retry.go:31] will retry after 13.258391709s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1213 11:47:41.170971  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:47:42.708360  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:47:42.777228  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1213 11:47:42.777262  596998 retry.go:31] will retry after 14.462485223s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1213 11:47:43.171411  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:47:45.171885  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:47:47.671008  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:47:50.170919  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:47:52.171024  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:47:52.825279  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:47:52.895300  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1213 11:47:52.895334  596998 retry.go:31] will retry after 42.53439734s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1213 11:47:54.171468  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:47:56.671010  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:47:57.240410  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:47:57.300003  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1213 11:47:57.300038  596998 retry.go:31] will retry after 43.551114065s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1213 11:47:58.671871  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:01.171150  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:03.671009  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:06.171060  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:08.670995  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:48:09.206164  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:48:09.266520  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1213 11:48:09.266558  596998 retry.go:31] will retry after 38.20317151s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:48:10.671430  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:12.671901  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:15.171124  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:17.671141  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:19.671553  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:22.170909  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:24.170990  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:26.171623  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:28.671096  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:31.171091  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:33.670895  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
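
The node_ready poll above fires roughly every 2 to 2.5 seconds against the node object and will keep failing until the apiserver answers. Once it does, the same readiness check can be reproduced with plain kubectl; a sketch using the node name from this log:

    # Block until the node reports Ready, or give up after two minutes.
    kubectl wait --for=condition=Ready node/no-preload-333352 --timeout=120s
    # Or read the Ready condition directly.
    kubectl get node no-preload-333352 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
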
	I1213 11:48:35.430795  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:48:35.490443  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:48:35.490550  596998 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1213 11:48:35.670951  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:37.671093  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:40.171057  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:48:40.852278  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:48:40.916394  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:48:40.916509  596998 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
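
The storage-provisioner and default-storageclass callbacks fail for the same connection-refused reason as the dashboard, so neither addon is applied. If the control plane later becomes healthy, both can be re-enabled out of band with the standard subcommand (profile name from this log):

    minikube addons enable storage-provisioner -p no-preload-333352
    minikube addons enable default-storageclass -p no-preload-333352
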
	W1213 11:48:42.171144  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:44.171515  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:46.671821  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:48:47.470510  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:48:47.538580  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:48:47.538682  596998 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 11:48:47.541933  596998 out.go:179] * Enabled addons: 
	I1213 11:48:47.544738  596998 addons.go:530] duration metric: took 1m52.460244741s for enable addons: enabled=[]
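
The empty enabled=[] list confirms the outcome: after 1m52s of retries, no addon callback ever succeeded. The resulting addon state can be inspected with:

    minikube addons list -p no-preload-333352
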
	W1213 11:48:49.170971  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:51.171371  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:53.670885  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:55.671127  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:58.171050  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:00.171123  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:02.171184  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:04.670961  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:06.671604  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:09.171017  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:11.671001  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:13.671410  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:15.671910  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:18.171029  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:20.670977  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:23.170985  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:25.171248  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:27.670921  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:30.171027  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:32.171089  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:34.671060  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:37.170891  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:39.171056  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:41.670906  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:44.170836  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:46.171894  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:48.671002  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:51.170981  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:53.671005  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:55.671144  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:50:00.707869  589123 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000374701s
	I1213 11:50:00.707898  589123 kubeadm.go:319] 
	I1213 11:50:00.707956  589123 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 11:50:00.707990  589123 kubeadm.go:319] 	- The kubelet is not running
	I1213 11:50:00.708096  589123 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 11:50:00.708101  589123 kubeadm.go:319] 
	I1213 11:50:00.708207  589123 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 11:50:00.708239  589123 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 11:50:00.708270  589123 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 11:50:00.708274  589123 kubeadm.go:319] 
	I1213 11:50:00.719023  589123 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 11:50:00.719530  589123 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 11:50:00.719698  589123 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 11:50:00.720025  589123 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 11:50:00.720042  589123 kubeadm.go:319] 
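
kubeadm's advice above is the right starting point: the kubelet never became healthy on 127.0.0.1:10248, so the static control-plane pods were never created. The suggested diagnostics, run inside the node (a sketch; profile name taken from the containerd section below, and the `minikube ssh` command-passing form is an assumption):

    minikube ssh -p newest-cni-796924 "sudo systemctl status kubelet"
    minikube ssh -p newest-cni-796924 "sudo journalctl -xeu kubelet --no-pager | tail -n 100"
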
	I1213 11:50:00.720173  589123 kubeadm.go:403] duration metric: took 8m6.761683072s to StartCluster
	I1213 11:50:00.720209  589123 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:50:00.720274  589123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:50:00.720362  589123 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 11:50:00.755118  589123 cri.go:89] found id: ""
	I1213 11:50:00.755161  589123 logs.go:282] 0 containers: []
	W1213 11:50:00.755171  589123 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:50:00.755178  589123 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:50:00.755246  589123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:50:00.781097  589123 cri.go:89] found id: ""
	I1213 11:50:00.781120  589123 logs.go:282] 0 containers: []
	W1213 11:50:00.781128  589123 logs.go:284] No container was found matching "etcd"
	I1213 11:50:00.781134  589123 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:50:00.781192  589123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:50:00.806528  589123 cri.go:89] found id: ""
	I1213 11:50:00.806552  589123 logs.go:282] 0 containers: []
	W1213 11:50:00.806559  589123 logs.go:284] No container was found matching "coredns"
	I1213 11:50:00.806566  589123 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:50:00.806623  589123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:50:00.836428  589123 cri.go:89] found id: ""
	I1213 11:50:00.836452  589123 logs.go:282] 0 containers: []
	W1213 11:50:00.836460  589123 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:50:00.836466  589123 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:50:00.836530  589123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:50:00.860830  589123 cri.go:89] found id: ""
	I1213 11:50:00.860898  589123 logs.go:282] 0 containers: []
	W1213 11:50:00.860915  589123 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:50:00.860922  589123 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:50:00.860991  589123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:50:00.886194  589123 cri.go:89] found id: ""
	I1213 11:50:00.886222  589123 logs.go:282] 0 containers: []
	W1213 11:50:00.886230  589123 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:50:00.886237  589123 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:50:00.886298  589123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:50:00.911416  589123 cri.go:89] found id: ""
	I1213 11:50:00.911442  589123 logs.go:282] 0 containers: []
	W1213 11:50:00.911451  589123 logs.go:284] No container was found matching "kindnet"
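
The sweep above is minikube checking, component by component, whether any control-plane container was ever created; every query returns an empty id list, consistent with the kubelet never starting the static pods. The same sweep, condensed into a loop that mirrors the exact crictl invocation from the log:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      echo "== $c =="
      sudo crictl ps -a --quiet --name="$c"   # empty output means no such container exists
    done
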
	I1213 11:50:00.911461  589123 logs.go:123] Gathering logs for dmesg ...
	I1213 11:50:00.911494  589123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:50:00.927545  589123 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:50:00.927575  589123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:50:00.994023  589123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:50:00.985916    4847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:50:00.986512    4847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:50:00.988048    4847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:50:00.988526    4847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:50:00.990075    4847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:50:00.985916    4847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:50:00.986512    4847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:50:00.988048    4847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:50:00.988526    4847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:50:00.990075    4847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:50:00.994047  589123 logs.go:123] Gathering logs for containerd ...
	I1213 11:50:00.994060  589123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:50:01.033895  589123 logs.go:123] Gathering logs for container status ...
	I1213 11:50:01.033932  589123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:50:01.062457  589123 logs.go:123] Gathering logs for kubelet ...
	I1213 11:50:01.062485  589123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
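
At this point minikube assembles its failure report from four sources: dmesg, kubectl describe nodes (which fails, since the apiserver is down), the containerd journal, and the kubelet journal. The same bundle can be captured in one command, as the boxed advice further down recommends:

    minikube logs --file=logs.txt -p newest-cni-796924
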
	W1213 11:50:01.120952  589123 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000374701s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 11:50:01.121022  589123 out.go:285] * 
	W1213 11:50:01.121080  589123 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout and stderr omitted: byte-for-byte identical to the kubeadm init transcript above]
	
	W1213 11:50:01.121096  589123 out.go:285] * 
	W1213 11:50:01.123307  589123 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 11:50:01.129091  589123 out.go:203] 
	W1213 11:50:01.132826  589123 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout and stderr omitted: byte-for-byte identical to the kubeadm init transcript above]
	
	W1213 11:50:01.132880  589123 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 11:50:01.132907  589123 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 11:50:01.136752  589123 out.go:203] 
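
The K8S_KUBELET_NOT_RUNNING exit reason and the suggestion above point at a kubelet/cgroup-driver mismatch as the most likely cause; the preflight warnings also flag deprecated cgroups v1 support on this 5.15 AWS kernel. The suggested retry, as a sketch (profile and flags taken from this report; whether it resolves this particular arm64/cgroups combination is not established here):

    minikube start -p newest-cni-796924 --driver=docker --container-runtime=containerd \
      --extra-config=kubelet.cgroup-driver=systemd
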
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.265909379Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.265936489Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.265986500Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.266013774Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.266024572Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.266037405Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.266046923Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.266058050Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.266074567Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.266104909Z" level=info msg="Connect containerd service"
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.266428859Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.267145379Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.278775042Z" level=info msg="Start subscribing containerd event"
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.278930251Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.279075164Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.279020098Z" level=info msg="Start recovering state"
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.316986614Z" level=info msg="Start event monitor"
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.317036821Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.317046774Z" level=info msg="Start streaming server"
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.317056440Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.317064596Z" level=info msg="runtime interface starting up..."
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.317071119Z" level=info msg="starting plugins..."
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.317082492Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 11:41:52 newest-cni-796924 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.319226019Z" level=info msg="containerd successfully booted in 0.078209s"
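
The "failed to load cni during init" error earlier in this containerd boot sequence is expected at this point: the CRI plugin keeps retrying until a config file appears in /etc/cni/net.d. Whether one was ever written can be checked against the node container named in the log (a diagnostic sketch, assuming the container is still running):

	docker exec newest-cni-796924 ls -l /etc/cni/net.d

An empty directory is consistent with the --network-plugin=cni start flags recorded further down in this report, where minikube warns that the user must supply their own CNI.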
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:50:02.276737    4970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:50:02.277362    4970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:50:02.279351    4970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:50:02.279923    4970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:50:02.281585    4970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
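
All five retries above fail with "connection refused" on [::1]:8443, so nothing is listening on the apiserver port at all; the kubelet section below shows why it never came up. A direct probe from inside the node would look like this (a sketch, assuming curl is available in the kicbase image):

	docker exec newest-cni-796924 curl -sk https://localhost:8443/healthz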
	
	
	==> dmesg <==
	[ +12.907693] overlayfs: idmapped layers are currently not supported
	[Dec13 09:25] overlayfs: idmapped layers are currently not supported
	[ +26.192425] overlayfs: idmapped layers are currently not supported
	[Dec13 09:26] overlayfs: idmapped layers are currently not supported
	[ +25.729788] overlayfs: idmapped layers are currently not supported
	[Dec13 09:27] overlayfs: idmapped layers are currently not supported
	[Dec13 09:28] overlayfs: idmapped layers are currently not supported
	[Dec13 09:31] overlayfs: idmapped layers are currently not supported
	[Dec13 09:32] overlayfs: idmapped layers are currently not supported
	[ +17.979093] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec13 09:43] overlayfs: idmapped layers are currently not supported
	[Dec13 09:45] overlayfs: idmapped layers are currently not supported
	[ +25.885102] overlayfs: idmapped layers are currently not supported
	[Dec13 09:46] overlayfs: idmapped layers are currently not supported
	[ +22.078149] overlayfs: idmapped layers are currently not supported
	[Dec13 09:47] overlayfs: idmapped layers are currently not supported
	[Dec13 09:48] overlayfs: idmapped layers are currently not supported
	[Dec13 09:49] overlayfs: idmapped layers are currently not supported
	[Dec13 09:51] overlayfs: idmapped layers are currently not supported
	[ +17.043564] overlayfs: idmapped layers are currently not supported
	[Dec13 09:52] overlayfs: idmapped layers are currently not supported
	[Dec13 09:53] overlayfs: idmapped layers are currently not supported
	[Dec13 10:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:19] hrtimer: interrupt took 21247146 ns
	
	
	==> kernel <==
	 11:50:02 up  4:32,  0 user,  load average: 1.07, 0.93, 1.53
	Linux newest-cni-796924 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 11:49:59 newest-cni-796924 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:49:59 newest-cni-796924 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 13 11:49:59 newest-cni-796924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:49:59 newest-cni-796924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:49:59 newest-cni-796924 kubelet[4773]: E1213 11:49:59.968968    4773 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:49:59 newest-cni-796924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:49:59 newest-cni-796924 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:50:00 newest-cni-796924 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 13 11:50:00 newest-cni-796924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:50:00 newest-cni-796924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:50:00 newest-cni-796924 kubelet[4779]: E1213 11:50:00.739161    4779 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:50:00 newest-cni-796924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:50:00 newest-cni-796924 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:50:01 newest-cni-796924 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 13 11:50:01 newest-cni-796924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:50:01 newest-cni-796924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:50:01 newest-cni-796924 kubelet[4872]: E1213 11:50:01.488977    4872 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:50:01 newest-cni-796924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:50:01 newest-cni-796924 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:50:02 newest-cni-796924 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 13 11:50:02 newest-cni-796924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:50:02 newest-cni-796924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:50:02 newest-cni-796924 kubelet[4963]: E1213 11:50:02.257594    4963 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:50:02 newest-cni-796924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:50:02 newest-cni-796924 systemd[1]: kubelet.service: Failed with result 'exit-code'.
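
This restart loop is the root cause of the failure: the v1.35.0-beta.0 kubelet refuses to start on a cgroup v1 host, and systemd cycles it straight back into the same validation error (322 restarts by the time these logs were captured). The cgroup version of a host can be checked with stat (cgroup v2 reports cgroup2fs here, cgroup v1 reports tmpfs):

	stat -fc %T /sys/fs/cgroup

The kernel section above (5.15.0-1084-aws on an Ubuntu 20.04 build host) is consistent with a cgroup v1 default, which Ubuntu only changed in 21.10.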
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-796924 -n newest-cni-796924
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-796924 -n newest-cni-796924: exit status 6 (351.69444ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1213 11:50:02.829484  601695 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-796924" does not appear in /home/jenkins/minikube-integration/22127-307042/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-796924" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (502.28s)
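
The status error above is secondary damage: the profile never got far enough to write an endpoint into the kubeconfig, so the "does not appear in .../kubeconfig" message is expected after a failed start. On a healthy cluster, the stale-context warning would be resolved exactly as it suggests, with the profile selected via -p:

	out/minikube-linux-arm64 update-context -p newest-cni-796924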

TestStartStop/group/no-preload/serial/DeployApp (3.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-333352 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context no-preload-333352 create -f testdata/busybox.yaml: exit status 1 (87.730886ms)

** stderr ** 
	error: context "no-preload-333352" does not exist

** /stderr **
start_stop_delete_test.go:194: kubectl --context no-preload-333352 create -f testdata/busybox.yaml failed: exit status 1
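
The missing-context error is the same failure seen from kubectl's side: no context for this profile was ever merged into the kubeconfig. What is actually present there can be listed with:

	kubectl config get-contexts --kubeconfig /home/jenkins/minikube-integration/22127-307042/kubeconfig
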
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-333352
helpers_test.go:244: (dbg) docker inspect no-preload-333352:

-- stdout --
	[
	    {
	        "Id": "ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db",
	        "Created": "2025-12-13T11:36:44.52795509Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 568910,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T11:36:44.610473104Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db/hostname",
	        "HostsPath": "/var/lib/docker/containers/ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db/hosts",
	        "LogPath": "/var/lib/docker/containers/ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db/ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db-json.log",
	        "Name": "/no-preload-333352",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-333352:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-333352",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db",
	                "LowerDir": "/var/lib/docker/overlay2/d63500bace015e4f0992e53022eb307c815344bef0eac4b57fc68ef6b6be3891-init/diff:/var/lib/docker/overlay2/5d0ca86320f2c06765995d644a73af30cd6ddb36f5c1a2a6b1ebb24af53cdabd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d63500bace015e4f0992e53022eb307c815344bef0eac4b57fc68ef6b6be3891/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d63500bace015e4f0992e53022eb307c815344bef0eac4b57fc68ef6b6be3891/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d63500bace015e4f0992e53022eb307c815344bef0eac4b57fc68ef6b6be3891/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-333352",
	                "Source": "/var/lib/docker/volumes/no-preload-333352/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-333352",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-333352",
	                "name.minikube.sigs.k8s.io": "no-preload-333352",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "661b691bd6512fd4efbd202820a9bae1c5beb21cce06578707e71b64c02a0d52",
	            "SandboxKey": "/var/run/docker/netns/661b691bd651",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33405"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33406"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33409"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33407"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33408"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-333352": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:23:72:9e:c3:20",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ee20fc50f482b31273047147a2f419c36704bb98933537d0ac5901a560402043",
	                    "EndpointID": "eaefa46f6237ec9d0c60ef1c735019996dda65756a613e136b17ca120c60027b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-333352",
	                        "ca124efb8aeb"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
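
The inspect output shows the container itself Running with all five ports published to ephemeral host ports, so the failure lies inside the guest rather than at the Docker layer. Single fields can be extracted with the same Go-template syntax the harness uses later in this log; for example, the host port mapped to the apiserver, which for the state captured above would print 33408 (a sketch):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-333352
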
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-333352 -n no-preload-333352
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-333352 -n no-preload-333352: exit status 6 (325.316972ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1213 11:45:18.352487  594362 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-333352" does not appear in /home/jenkins/minikube-integration/22127-307042/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-333352 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-333352 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-333352            │ jenkins │ v1.37.0 │ 13 Dec 25 11:36 UTC │                     │
	│ start   │ -p cert-expiration-086397 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                            │ cert-expiration-086397       │ jenkins │ v1.37.0 │ 13 Dec 25 11:36 UTC │ 13 Dec 25 11:36 UTC │
	│ delete  │ -p cert-expiration-086397                                                                                                                                                                                                                                  │ cert-expiration-086397       │ jenkins │ v1.37.0 │ 13 Dec 25 11:36 UTC │ 13 Dec 25 11:36 UTC │
	│ start   │ -p embed-certs-951675 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:36 UTC │ 13 Dec 25 11:37 UTC │
	│ addons  │ enable metrics-server -p embed-certs-951675 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:38 UTC │ 13 Dec 25 11:38 UTC │
	│ stop    │ -p embed-certs-951675 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:38 UTC │ 13 Dec 25 11:38 UTC │
	│ addons  │ enable dashboard -p embed-certs-951675 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:38 UTC │ 13 Dec 25 11:38 UTC │
	│ start   │ -p embed-certs-951675 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:38 UTC │ 13 Dec 25 11:39 UTC │
	│ image   │ embed-certs-951675 image list --format=json                                                                                                                                                                                                                │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ pause   │ -p embed-certs-951675 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ unpause │ -p embed-certs-951675 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ delete  │ -p embed-certs-951675                                                                                                                                                                                                                                      │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ delete  │ -p embed-certs-951675                                                                                                                                                                                                                                      │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ delete  │ -p disable-driver-mounts-823668                                                                                                                                                                                                                            │ disable-driver-mounts-823668 │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ start   │ -p default-k8s-diff-port-191845 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:40 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-191845 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:40 UTC │
	│ stop    │ -p default-k8s-diff-port-191845 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:40 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-191845 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:40 UTC │
	│ start   │ -p default-k8s-diff-port-191845 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:41 UTC │
	│ image   │ default-k8s-diff-port-191845 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ pause   │ -p default-k8s-diff-port-191845 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ unpause │ -p default-k8s-diff-port-191845 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ delete  │ -p default-k8s-diff-port-191845                                                                                                                                                                                                                            │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ delete  │ -p default-k8s-diff-port-191845                                                                                                                                                                                                                            │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ start   │ -p newest-cni-796924 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-796924            │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
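
Note the first audit row: the no-preload-333352 start at 11:36 has no end time, consistent with the failures recorded in this group. The same trail can be printed on its own (a sketch, assuming the local build's logs command supports the --audit flag):

	out/minikube-linux-arm64 logs --audit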
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 11:41:40
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 11:41:40.611522  589123 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:41:40.611651  589123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:41:40.611663  589123 out.go:374] Setting ErrFile to fd 2...
	I1213 11:41:40.611668  589123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:41:40.611912  589123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 11:41:40.612341  589123 out.go:368] Setting JSON to false
	I1213 11:41:40.613214  589123 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":15853,"bootTime":1765610247,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 11:41:40.613281  589123 start.go:143] virtualization:  
	I1213 11:41:40.617550  589123 out.go:179] * [newest-cni-796924] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:41:40.621011  589123 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:41:40.621109  589123 notify.go:221] Checking for updates...
	I1213 11:41:40.627428  589123 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:41:40.630647  589123 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 11:41:40.633653  589123 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 11:41:40.636743  589123 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:41:40.639875  589123 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:41:40.643470  589123 config.go:182] Loaded profile config "no-preload-333352": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:41:40.643591  589123 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:41:40.680100  589123 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:41:40.680226  589123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:41:40.756616  589123 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:41:40.747142182 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:41:40.756723  589123 docker.go:319] overlay module found
	I1213 11:41:40.759961  589123 out.go:179] * Using the docker driver based on user configuration
	I1213 11:41:40.762776  589123 start.go:309] selected driver: docker
	I1213 11:41:40.762800  589123 start.go:927] validating driver "docker" against <nil>
	I1213 11:41:40.762814  589123 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:41:40.763539  589123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:41:40.821660  589123 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:41:40.812604764 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:41:40.821819  589123 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1213 11:41:40.821853  589123 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1213 11:41:40.822076  589123 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 11:41:40.824951  589123 out.go:179] * Using Docker driver with root privileges
	I1213 11:41:40.827804  589123 cni.go:84] Creating CNI manager for ""
	I1213 11:41:40.827876  589123 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 11:41:40.827892  589123 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 11:41:40.827980  589123 start.go:353] cluster config:
	{Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:41:40.831095  589123 out.go:179] * Starting "newest-cni-796924" primary control-plane node in "newest-cni-796924" cluster
	I1213 11:41:40.833926  589123 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 11:41:40.836836  589123 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:41:40.839602  589123 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 11:41:40.839653  589123 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 11:41:40.839668  589123 cache.go:65] Caching tarball of preloaded images
	I1213 11:41:40.839677  589123 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:41:40.839751  589123 preload.go:238] Found /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 11:41:40.839761  589123 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 11:41:40.839868  589123 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/config.json ...
	I1213 11:41:40.839885  589123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/config.json: {Name:mk0ce282ac2d53ca7f0abb05f9aee384330b83fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:40.859227  589123 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:41:40.859251  589123 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:41:40.859271  589123 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:41:40.859304  589123 start.go:360] acquireMachinesLock for newest-cni-796924: {Name:mkb23dc851632c47983afd0f3cb215d071a4c6d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:41:40.859430  589123 start.go:364] duration metric: took 105.773µs to acquireMachinesLock for "newest-cni-796924"
	I1213 11:41:40.859462  589123 start.go:93] Provisioning new machine with config: &{Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 11:41:40.859540  589123 start.go:125] createHost starting for "" (driver="docker")
	I1213 11:41:40.862994  589123 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 11:41:40.863248  589123 start.go:159] libmachine.API.Create for "newest-cni-796924" (driver="docker")
	I1213 11:41:40.863284  589123 client.go:173] LocalClient.Create starting
	I1213 11:41:40.863374  589123 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem
	I1213 11:41:40.863413  589123 main.go:143] libmachine: Decoding PEM data...
	I1213 11:41:40.863433  589123 main.go:143] libmachine: Parsing certificate...
	I1213 11:41:40.863487  589123 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem
	I1213 11:41:40.863508  589123 main.go:143] libmachine: Decoding PEM data...
	I1213 11:41:40.863527  589123 main.go:143] libmachine: Parsing certificate...
	I1213 11:41:40.863921  589123 cli_runner.go:164] Run: docker network inspect newest-cni-796924 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 11:41:40.879900  589123 cli_runner.go:211] docker network inspect newest-cni-796924 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 11:41:40.879981  589123 network_create.go:284] running [docker network inspect newest-cni-796924] to gather additional debugging logs...
	I1213 11:41:40.880002  589123 cli_runner.go:164] Run: docker network inspect newest-cni-796924
	W1213 11:41:40.894997  589123 cli_runner.go:211] docker network inspect newest-cni-796924 returned with exit code 1
	I1213 11:41:40.895050  589123 network_create.go:287] error running [docker network inspect newest-cni-796924]: docker network inspect newest-cni-796924: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-796924 not found
	I1213 11:41:40.895065  589123 network_create.go:289] output of [docker network inspect newest-cni-796924]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-796924 not found
	
	** /stderr **
	I1213 11:41:40.895186  589123 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:41:40.912767  589123 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-381e4ce3c9ab IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:2d:23:57:0e:cc} reservation:<nil>}
	I1213 11:41:40.913250  589123 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bd1082d121b0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:7a:42:ce:41:ea:ae} reservation:<nil>}
	I1213 11:41:40.913761  589123 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ebeb7162e340 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:cf:aa:41:ac:19} reservation:<nil>}
	I1213 11:41:40.914391  589123 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019db560}
	I1213 11:41:40.914425  589123 network_create.go:124] attempt to create docker network newest-cni-796924 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1213 11:41:40.914493  589123 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-796924 newest-cni-796924
	I1213 11:41:40.975597  589123 network_create.go:108] docker network newest-cni-796924 192.168.76.0/24 created
	I1213 11:41:40.975632  589123 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-796924" container
	I1213 11:41:40.975710  589123 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 11:41:40.992098  589123 cli_runner.go:164] Run: docker volume create newest-cni-796924 --label name.minikube.sigs.k8s.io=newest-cni-796924 --label created_by.minikube.sigs.k8s.io=true
	I1213 11:41:41.011676  589123 oci.go:103] Successfully created a docker volume newest-cni-796924
	I1213 11:41:41.011779  589123 cli_runner.go:164] Run: docker run --rm --name newest-cni-796924-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-796924 --entrypoint /usr/bin/test -v newest-cni-796924:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 11:41:41.562335  589123 oci.go:107] Successfully prepared a docker volume newest-cni-796924
	I1213 11:41:41.562406  589123 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 11:41:41.562420  589123 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 11:41:41.562520  589123 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-796924:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 11:41:45.483539  589123 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-796924:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.920978562s)
	I1213 11:41:45.483577  589123 kic.go:203] duration metric: took 3.921153184s to extract preloaded images to volume ...
	W1213 11:41:45.483725  589123 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 11:41:45.483849  589123 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 11:41:45.544786  589123 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-796924 --name newest-cni-796924 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-796924 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-796924 --network newest-cni-796924 --ip 192.168.76.2 --volume newest-cni-796924:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 11:41:45.837300  589123 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Running}}
	I1213 11:41:45.859478  589123 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:41:45.883282  589123 cli_runner.go:164] Run: docker exec newest-cni-796924 stat /var/lib/dpkg/alternatives/iptables
	I1213 11:41:45.939942  589123 oci.go:144] the created container "newest-cni-796924" has a running status.
	I1213 11:41:45.939979  589123 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa...
	I1213 11:41:46.475943  589123 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 11:41:46.497112  589123 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:41:46.514893  589123 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 11:41:46.514914  589123 kic_runner.go:114] Args: [docker exec --privileged newest-cni-796924 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 11:41:46.555872  589123 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:41:46.582882  589123 machine.go:94] provisionDockerMachine start ...
	I1213 11:41:46.583000  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:46.601258  589123 main.go:143] libmachine: Using SSH client type: native
	I1213 11:41:46.601613  589123 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33430 <nil> <nil>}
	I1213 11:41:46.601628  589123 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:41:46.602167  589123 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43966->127.0.0.1:33430: read: connection reset by peer
	I1213 11:41:49.762525  589123 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-796924
	
	I1213 11:41:49.762610  589123 ubuntu.go:182] provisioning hostname "newest-cni-796924"
	I1213 11:41:49.762751  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:49.782841  589123 main.go:143] libmachine: Using SSH client type: native
	I1213 11:41:49.783298  589123 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33430 <nil> <nil>}
	I1213 11:41:49.783328  589123 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-796924 && echo "newest-cni-796924" | sudo tee /etc/hostname
	I1213 11:41:49.948352  589123 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-796924
	
	I1213 11:41:49.948435  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:49.965977  589123 main.go:143] libmachine: Using SSH client type: native
	I1213 11:41:49.966316  589123 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33430 <nil> <nil>}
	I1213 11:41:49.966341  589123 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-796924' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-796924/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-796924' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:41:50.147128  589123 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:41:50.147171  589123 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-307042/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-307042/.minikube}
	I1213 11:41:50.147219  589123 ubuntu.go:190] setting up certificates
	I1213 11:41:50.147230  589123 provision.go:84] configureAuth start
	I1213 11:41:50.147297  589123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-796924
	I1213 11:41:50.165702  589123 provision.go:143] copyHostCerts
	I1213 11:41:50.165784  589123 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem, removing ...
	I1213 11:41:50.165802  589123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem
	I1213 11:41:50.165914  589123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem (1082 bytes)
	I1213 11:41:50.166068  589123 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem, removing ...
	I1213 11:41:50.166080  589123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem
	I1213 11:41:50.166123  589123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem (1123 bytes)
	I1213 11:41:50.166210  589123 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem, removing ...
	I1213 11:41:50.166226  589123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem
	I1213 11:41:50.166257  589123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem (1675 bytes)
	I1213 11:41:50.166335  589123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem org=jenkins.newest-cni-796924 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-796924]
	I1213 11:41:50.575993  589123 provision.go:177] copyRemoteCerts
	I1213 11:41:50.576089  589123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:41:50.576156  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:50.593521  589123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
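The sshutil line above carries everything needed to open the same session manually; the port (33430), user (docker) and key path are taken verbatim from the log:

    ssh -i /home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa \
        -p 33430 docker@127.0.0.1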
	I1213 11:41:50.702596  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 11:41:50.720289  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:41:50.738001  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 11:41:50.755303  589123 provision.go:87] duration metric: took 608.049982ms to configureAuth
	I1213 11:41:50.755333  589123 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:41:50.755533  589123 config.go:182] Loaded profile config "newest-cni-796924": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:41:50.755547  589123 machine.go:97] duration metric: took 4.172642608s to provisionDockerMachine
	I1213 11:41:50.755555  589123 client.go:176] duration metric: took 9.892260099s to LocalClient.Create
	I1213 11:41:50.755575  589123 start.go:167] duration metric: took 9.892327365s to libmachine.API.Create "newest-cni-796924"
	I1213 11:41:50.755586  589123 start.go:293] postStartSetup for "newest-cni-796924" (driver="docker")
	I1213 11:41:50.755596  589123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:41:50.755647  589123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:41:50.755689  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:50.772594  589123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:41:50.878962  589123 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:41:50.882465  589123 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:41:50.882496  589123 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:41:50.882513  589123 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/addons for local assets ...
	I1213 11:41:50.882569  589123 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/files for local assets ...
	I1213 11:41:50.882649  589123 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> 3089152.pem in /etc/ssl/certs
	I1213 11:41:50.882784  589123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:41:50.890136  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 11:41:50.909142  589123 start.go:296] duration metric: took 153.541145ms for postStartSetup
	I1213 11:41:50.909520  589123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-796924
	I1213 11:41:50.926272  589123 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/config.json ...
	I1213 11:41:50.926557  589123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:41:50.926615  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:50.943196  589123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:41:51.043825  589123 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:41:51.048871  589123 start.go:128] duration metric: took 10.189316484s to createHost
	I1213 11:41:51.048902  589123 start.go:83] releasing machines lock for "newest-cni-796924", held for 10.189458492s
	I1213 11:41:51.048990  589123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-796924
	I1213 11:41:51.066001  589123 ssh_runner.go:195] Run: cat /version.json
	I1213 11:41:51.066070  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:51.066359  589123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:41:51.066428  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:51.089473  589123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:41:51.096259  589123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:41:51.199327  589123 ssh_runner.go:195] Run: systemctl --version
	I1213 11:41:51.296961  589123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:41:51.301350  589123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:41:51.301424  589123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:41:51.333583  589123 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 11:41:51.333658  589123 start.go:496] detecting cgroup driver to use...
	I1213 11:41:51.333709  589123 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 11:41:51.333790  589123 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 11:41:51.348970  589123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:41:51.362099  589123 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:41:51.362222  589123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:41:51.379694  589123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:41:51.398786  589123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:41:51.510657  589123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:41:51.629156  589123 docker.go:234] disabling docker service ...
	I1213 11:41:51.629223  589123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:41:51.650731  589123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:41:51.664169  589123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:41:51.793148  589123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:41:51.904796  589123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:41:51.919458  589123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:41:51.942455  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 11:41:51.956281  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 11:41:51.965941  589123 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 11:41:51.966013  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 11:41:51.977493  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:41:51.987404  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 11:41:52.000948  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:41:52.013279  589123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:41:52.023039  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 11:41:52.032853  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 11:41:52.042519  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
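The sed series above pins the pause image, forces the cgroupfs driver (SystemdCgroup = false), migrates any runc v1 shim references to io.containerd.runc.v2, and re-enables unprivileged ports. A sketch of how to confirm the edits landed, assuming the version-2 config.toml schema the kicbase image ships:

    grep -nE 'SystemdCgroup|sandbox_image|enable_unprivileged_ports' /etc/containerd/config.toml
    # expected: SystemdCgroup = false, sandbox_image = "registry.k8s.io/pause:3.10.1",
    #           enable_unprivileged_ports = true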
	I1213 11:41:52.052346  589123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:41:52.060125  589123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:41:52.068281  589123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:41:52.179247  589123 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 11:41:52.320321  589123 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 11:41:52.320429  589123 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 11:41:52.324439  589123 start.go:564] Will wait 60s for crictl version
	I1213 11:41:52.324501  589123 ssh_runner.go:195] Run: which crictl
	I1213 11:41:52.328708  589123 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:41:52.357589  589123 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 11:41:52.357683  589123 ssh_runner.go:195] Run: containerd --version
	I1213 11:41:52.383274  589123 ssh_runner.go:195] Run: containerd --version
	I1213 11:41:52.413360  589123 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 11:41:52.416432  589123 cli_runner.go:164] Run: docker network inspect newest-cni-796924 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:41:52.432557  589123 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 11:41:52.436286  589123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:41:52.449106  589123 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 11:41:52.452071  589123 kubeadm.go:884] updating cluster {Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:41:52.452217  589123 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 11:41:52.452308  589123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:41:52.477318  589123 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 11:41:52.477343  589123 containerd.go:534] Images already preloaded, skipping extraction
	I1213 11:41:52.477404  589123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:41:52.505926  589123 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 11:41:52.505953  589123 cache_images.go:86] Images are preloaded, skipping loading
	I1213 11:41:52.505961  589123 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 11:41:52.506065  589123 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-796924 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:41:52.506135  589123 ssh_runner.go:195] Run: sudo crictl info
	I1213 11:41:52.531708  589123 cni.go:84] Creating CNI manager for ""
	I1213 11:41:52.531733  589123 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 11:41:52.531753  589123 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 11:41:52.531776  589123 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-796924 NodeName:newest-cni-796924 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:41:52.531907  589123 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-796924"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
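The generated config can be checked offline before it reaches kubeadm init; recent kubeadm releases (v1.31+) ship a validator, though the test itself never invokes it (a sketch, not part of this run):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new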
	I1213 11:41:52.531983  589123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 11:41:52.540473  589123 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 11:41:52.540571  589123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:41:52.548635  589123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 11:41:52.562445  589123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 11:41:52.579341  589123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1213 11:41:52.593144  589123 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:41:52.596805  589123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:41:52.607006  589123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:41:52.727771  589123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:41:52.751356  589123 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924 for IP: 192.168.76.2
	I1213 11:41:52.751381  589123 certs.go:195] generating shared ca certs ...
	I1213 11:41:52.751399  589123 certs.go:227] acquiring lock for ca certs: {Name:mkca84a85b1b664dba08ef567e84d493256f825e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:52.751547  589123 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key
	I1213 11:41:52.751597  589123 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key
	I1213 11:41:52.751607  589123 certs.go:257] generating profile certs ...
	I1213 11:41:52.751662  589123 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/client.key
	I1213 11:41:52.751679  589123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/client.crt with IP's: []
	I1213 11:41:53.086363  589123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/client.crt ...
	I1213 11:41:53.086398  589123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/client.crt: {Name:mk66b963bdd54f4b935fe2fc7acd97dde553339b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:53.086603  589123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/client.key ...
	I1213 11:41:53.086620  589123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/client.key: {Name:mk98638456845d9072484c2ea9cf4343d6af1634 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:53.086739  589123 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key.ced45374
	I1213 11:41:53.086760  589123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.crt.ced45374 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1213 11:41:53.240504  589123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.crt.ced45374 ...
	I1213 11:41:53.240537  589123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.crt.ced45374: {Name:mkabd19bc7e960d2c555d82ddd752e663c8f6cd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:53.240708  589123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key.ced45374 ...
	I1213 11:41:53.240722  589123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key.ced45374: {Name:mk3e4fdd1c06bfd329cc4a39da890d8da6317b83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:53.240816  589123 certs.go:382] copying /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.crt.ced45374 -> /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.crt
	I1213 11:41:53.240898  589123 certs.go:386] copying /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key.ced45374 -> /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key
	I1213 11:41:53.240954  589123 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.key
	I1213 11:41:53.240973  589123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.crt with IP's: []
	I1213 11:41:53.471880  589123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.crt ...
	I1213 11:41:53.471916  589123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.crt: {Name:mkc7d686a714b0dc00954cf052cbfbc601a1b715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:53.472127  589123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.key ...
	I1213 11:41:53.472146  589123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.key: {Name:mk3b8f645a4e8504ec9bd2eed45071861029af54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:53.472358  589123 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem (1338 bytes)
	W1213 11:41:53.472406  589123 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915_empty.pem, impossibly tiny 0 bytes
	I1213 11:41:53.472425  589123 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:41:53.472456  589123 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:41:53.472484  589123 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:41:53.472514  589123 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem (1675 bytes)
	I1213 11:41:53.472565  589123 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 11:41:53.473197  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:41:53.497619  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:41:53.520852  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:41:53.540142  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:41:53.558660  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 11:41:53.576992  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 11:41:53.595267  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:41:53.613794  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 11:41:53.632148  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /usr/share/ca-certificates/3089152.pem (1708 bytes)
	I1213 11:41:53.650879  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:41:53.676104  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem --> /usr/share/ca-certificates/308915.pem (1338 bytes)
	I1213 11:41:53.697746  589123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:41:53.714070  589123 ssh_runner.go:195] Run: openssl version
	I1213 11:41:53.722093  589123 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3089152.pem
	I1213 11:41:53.730578  589123 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3089152.pem /etc/ssl/certs/3089152.pem
	I1213 11:41:53.738421  589123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3089152.pem
	I1213 11:41:53.742437  589123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:22 /usr/share/ca-certificates/3089152.pem
	I1213 11:41:53.742533  589123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3089152.pem
	I1213 11:41:53.786290  589123 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:41:53.794147  589123 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3089152.pem /etc/ssl/certs/3ec20f2e.0
	I1213 11:41:53.802168  589123 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:41:53.809707  589123 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:41:53.817606  589123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:41:53.821659  589123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:41:53.821728  589123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:41:53.863050  589123 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:41:53.870933  589123 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 11:41:53.878551  589123 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/308915.pem
	I1213 11:41:53.886061  589123 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/308915.pem /etc/ssl/certs/308915.pem
	I1213 11:41:53.893998  589123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/308915.pem
	I1213 11:41:53.898172  589123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:22 /usr/share/ca-certificates/308915.pem
	I1213 11:41:53.898239  589123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308915.pem
	I1213 11:41:53.939253  589123 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 11:41:53.946895  589123 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/308915.pem /etc/ssl/certs/51391683.0
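The hash-named symlinks above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash convention, which is how verification locates CA files. The general recipe, shown for the last cert:

    # the symlink name is the cert's subject hash plus a ".0" suffix
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/308915.pem)
    sudo ln -fs /usr/share/ca-certificates/308915.pem "/etc/ssl/certs/${h}.0"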
	I1213 11:41:53.954761  589123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:41:53.958396  589123 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 11:41:53.958494  589123 kubeadm.go:401] StartCluster: {Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:41:53.958606  589123 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 11:41:53.958783  589123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:41:53.989529  589123 cri.go:89] found id: ""
	I1213 11:41:53.989604  589123 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:41:53.998028  589123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 11:41:54.008661  589123 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:41:54.008741  589123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:41:54.018340  589123 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:41:54.018369  589123 kubeadm.go:158] found existing configuration files:
	
	I1213 11:41:54.018431  589123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:41:54.027307  589123 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:41:54.027393  589123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:41:54.036120  589123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:41:54.044655  589123 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:41:54.044733  589123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:41:54.053302  589123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:41:54.061899  589123 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:41:54.061991  589123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:41:54.070981  589123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:41:54.079553  589123 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:41:54.079622  589123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 11:41:54.087398  589123 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:41:54.133199  589123 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:41:54.133519  589123 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:41:54.230705  589123 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:41:54.230782  589123 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:41:54.230824  589123 kubeadm.go:319] OS: Linux
	I1213 11:41:54.230875  589123 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:41:54.230929  589123 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:41:54.230979  589123 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:41:54.231032  589123 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:41:54.231083  589123 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:41:54.231135  589123 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:41:54.231184  589123 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:41:54.231236  589123 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:41:54.231285  589123 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:41:54.298600  589123 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:41:54.298731  589123 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:41:54.298837  589123 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:41:54.307104  589123 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:41:54.313608  589123 out.go:252]   - Generating certificates and keys ...
	I1213 11:41:54.313778  589123 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:41:54.313890  589123 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:41:54.510481  589123 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 11:41:54.575310  589123 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 11:41:54.686709  589123 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 11:41:54.914237  589123 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 11:41:55.329374  589123 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 11:41:55.329538  589123 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-796924] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 11:41:55.443297  589123 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 11:41:55.443660  589123 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-796924] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 11:41:55.929252  589123 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 11:41:56.099892  589123 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 11:41:56.662486  589123 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 11:41:56.662923  589123 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:41:56.728098  589123 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:41:56.987601  589123 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:41:57.419088  589123 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:41:57.640413  589123 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:41:58.149864  589123 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:41:58.150638  589123 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:41:58.153451  589123 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:41:58.157369  589123 out.go:252]   - Booting up control plane ...
	I1213 11:41:58.157498  589123 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:41:58.157584  589123 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:41:58.157650  589123 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:41:58.194714  589123 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:41:58.194860  589123 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:41:58.202073  589123 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:41:58.202506  589123 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:41:58.202564  589123 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:41:58.355362  589123 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:41:58.355487  589123 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 11:45:15.897209  568526 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000248006s
	I1213 11:45:15.897238  568526 kubeadm.go:319] 
	I1213 11:45:15.897296  568526 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 11:45:15.897329  568526 kubeadm.go:319] 	- The kubelet is not running
	I1213 11:45:15.897444  568526 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 11:45:15.897460  568526 kubeadm.go:319] 
	I1213 11:45:15.897565  568526 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 11:45:15.897602  568526 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 11:45:15.897639  568526 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 11:45:15.897647  568526 kubeadm.go:319] 
	I1213 11:45:15.901779  568526 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 11:45:15.902208  568526 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 11:45:15.902322  568526 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 11:45:15.902560  568526 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 11:45:15.902570  568526 kubeadm.go:319] 
	I1213 11:45:15.902639  568526 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
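Following the hints kubeadm prints above, a minimal triage on the node would be (the curl probe is the exact health check kubeadm gave up on):

    systemctl status kubelet --no-pager
    journalctl -xeu kubelet --no-pager | tail -n 50
    curl -sS http://127.0.0.1:10248/healthz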
	I1213 11:45:15.902721  568526 kubeadm.go:403] duration metric: took 8m6.858200115s to StartCluster
	I1213 11:45:15.902773  568526 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:45:15.902832  568526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:45:15.928207  568526 cri.go:89] found id: ""
	I1213 11:45:15.928245  568526 logs.go:282] 0 containers: []
	W1213 11:45:15.928254  568526 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:45:15.928262  568526 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:45:15.928320  568526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:45:15.954471  568526 cri.go:89] found id: ""
	I1213 11:45:15.954508  568526 logs.go:282] 0 containers: []
	W1213 11:45:15.954520  568526 logs.go:284] No container was found matching "etcd"
	I1213 11:45:15.954531  568526 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:45:15.954610  568526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:45:15.979197  568526 cri.go:89] found id: ""
	I1213 11:45:15.979226  568526 logs.go:282] 0 containers: []
	W1213 11:45:15.979236  568526 logs.go:284] No container was found matching "coredns"
	I1213 11:45:15.979243  568526 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:45:15.979315  568526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:45:16.008075  568526 cri.go:89] found id: ""
	I1213 11:45:16.008098  568526 logs.go:282] 0 containers: []
	W1213 11:45:16.008107  568526 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:45:16.008118  568526 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:45:16.008193  568526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:45:16.034169  568526 cri.go:89] found id: ""
	I1213 11:45:16.034191  568526 logs.go:282] 0 containers: []
	W1213 11:45:16.034199  568526 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:45:16.034207  568526 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:45:16.034265  568526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:45:16.058826  568526 cri.go:89] found id: ""
	I1213 11:45:16.058854  568526 logs.go:282] 0 containers: []
	W1213 11:45:16.058862  568526 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:45:16.058869  568526 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:45:16.058928  568526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:45:16.083123  568526 cri.go:89] found id: ""
	I1213 11:45:16.083151  568526 logs.go:282] 0 containers: []
	W1213 11:45:16.083160  568526 logs.go:284] No container was found matching "kindnet"
	I1213 11:45:16.083171  568526 logs.go:123] Gathering logs for dmesg ...
	I1213 11:45:16.083184  568526 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:45:16.100676  568526 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:45:16.100707  568526 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:45:16.166022  568526 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:45:16.157896    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.158620    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.160237    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.160701    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.162431    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:45:16.157896    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.158620    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.160237    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.160701    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.162431    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:45:16.166047  568526 logs.go:123] Gathering logs for containerd ...
	I1213 11:45:16.166060  568526 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:45:16.207842  568526 logs.go:123] Gathering logs for container status ...
	I1213 11:45:16.207880  568526 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:45:16.235350  568526 logs.go:123] Gathering logs for kubelet ...
	I1213 11:45:16.235376  568526 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 11:45:16.294386  568526 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000248006s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 11:45:16.294456  568526 out.go:285] * 
	W1213 11:45:16.294516  568526 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000248006s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 11:45:16.294544  568526 out.go:285] * 
	W1213 11:45:16.296685  568526 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 11:45:16.302469  568526 out.go:203] 
	W1213 11:45:16.306292  568526 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000248006s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 11:45:16.306365  568526 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 11:45:16.306395  568526 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 11:45:16.309964  568526 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 11:36:56 no-preload-333352 containerd[759]: time="2025-12-13T11:36:56.089357963Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:36:57 no-preload-333352 containerd[759]: time="2025-12-13T11:36:57.511622237Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 13 11:36:57 no-preload-333352 containerd[759]: time="2025-12-13T11:36:57.516065454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 13 11:36:57 no-preload-333352 containerd[759]: time="2025-12-13T11:36:57.531326305Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:36:57 no-preload-333352 containerd[759]: time="2025-12-13T11:36:57.531822769Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:36:58 no-preload-333352 containerd[759]: time="2025-12-13T11:36:58.968197722Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 13 11:36:58 no-preload-333352 containerd[759]: time="2025-12-13T11:36:58.971116854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 13 11:36:58 no-preload-333352 containerd[759]: time="2025-12-13T11:36:58.980545274Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:36:58 no-preload-333352 containerd[759]: time="2025-12-13T11:36:58.981362816Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:37:02 no-preload-333352 containerd[759]: time="2025-12-13T11:37:02.214365383Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 13 11:37:02 no-preload-333352 containerd[759]: time="2025-12-13T11:37:02.217628084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 13 11:37:02 no-preload-333352 containerd[759]: time="2025-12-13T11:37:02.241331613Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:37:02 no-preload-333352 containerd[759]: time="2025-12-13T11:37:02.242087346Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:37:03 no-preload-333352 containerd[759]: time="2025-12-13T11:37:03.217262130Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 13 11:37:03 no-preload-333352 containerd[759]: time="2025-12-13T11:37:03.220012055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 13 11:37:03 no-preload-333352 containerd[759]: time="2025-12-13T11:37:03.228672623Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:37:03 no-preload-333352 containerd[759]: time="2025-12-13T11:37:03.229475338Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:37:04 no-preload-333352 containerd[759]: time="2025-12-13T11:37:04.630418294Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 13 11:37:04 no-preload-333352 containerd[759]: time="2025-12-13T11:37:04.633143086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 13 11:37:04 no-preload-333352 containerd[759]: time="2025-12-13T11:37:04.641567747Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:37:04 no-preload-333352 containerd[759]: time="2025-12-13T11:37:04.642255121Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:37:04 no-preload-333352 containerd[759]: time="2025-12-13T11:37:04.996924296Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 13 11:37:05 no-preload-333352 containerd[759]: time="2025-12-13T11:37:05.004833973Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 13 11:37:05 no-preload-333352 containerd[759]: time="2025-12-13T11:37:05.013913352Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:37:05 no-preload-333352 containerd[759]: time="2025-12-13T11:37:05.014372006Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:45:19.019890    5679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:19.020276    5679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:19.021805    5679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:19.022133    5679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:19.023619    5679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +12.907693] overlayfs: idmapped layers are currently not supported
	[Dec13 09:25] overlayfs: idmapped layers are currently not supported
	[ +26.192425] overlayfs: idmapped layers are currently not supported
	[Dec13 09:26] overlayfs: idmapped layers are currently not supported
	[ +25.729788] overlayfs: idmapped layers are currently not supported
	[Dec13 09:27] overlayfs: idmapped layers are currently not supported
	[Dec13 09:28] overlayfs: idmapped layers are currently not supported
	[Dec13 09:31] overlayfs: idmapped layers are currently not supported
	[Dec13 09:32] overlayfs: idmapped layers are currently not supported
	[ +17.979093] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec13 09:43] overlayfs: idmapped layers are currently not supported
	[Dec13 09:45] overlayfs: idmapped layers are currently not supported
	[ +25.885102] overlayfs: idmapped layers are currently not supported
	[Dec13 09:46] overlayfs: idmapped layers are currently not supported
	[ +22.078149] overlayfs: idmapped layers are currently not supported
	[Dec13 09:47] overlayfs: idmapped layers are currently not supported
	[Dec13 09:48] overlayfs: idmapped layers are currently not supported
	[Dec13 09:49] overlayfs: idmapped layers are currently not supported
	[Dec13 09:51] overlayfs: idmapped layers are currently not supported
	[ +17.043564] overlayfs: idmapped layers are currently not supported
	[Dec13 09:52] overlayfs: idmapped layers are currently not supported
	[Dec13 09:53] overlayfs: idmapped layers are currently not supported
	[Dec13 10:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:19] hrtimer: interrupt took 21247146 ns
	
	
	==> kernel <==
	 11:45:19 up  4:27,  0 user,  load average: 0.97, 1.38, 1.88
	Linux no-preload-333352 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 11:45:15 no-preload-333352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:45:16 no-preload-333352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 13 11:45:16 no-preload-333352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:45:16 no-preload-333352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:45:16 no-preload-333352 kubelet[5440]: E1213 11:45:16.527761    5440 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:45:16 no-preload-333352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:45:16 no-preload-333352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:45:17 no-preload-333352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 13 11:45:17 no-preload-333352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:45:17 no-preload-333352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:45:17 no-preload-333352 kubelet[5489]: E1213 11:45:17.237061    5489 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:45:17 no-preload-333352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:45:17 no-preload-333352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:45:17 no-preload-333352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 13 11:45:17 no-preload-333352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:45:17 no-preload-333352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:45:18 no-preload-333352 kubelet[5573]: E1213 11:45:18.019974    5573 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:45:18 no-preload-333352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:45:18 no-preload-333352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:45:18 no-preload-333352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 13 11:45:18 no-preload-333352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:45:18 no-preload-333352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:45:18 no-preload-333352 kubelet[5601]: E1213 11:45:18.751381    5601 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:45:18 no-preload-333352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:45:18 no-preload-333352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
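Editor's note: the kubelet journal at the end of the dump above contains the actual root cause. kubelet v1.35.0-beta.0 exits immediately with "kubelet is configured to not run on a host using cgroup v1", systemd restarts it in a loop (restart counters 320-323), so kubeadm's 4m0s wait on http://127.0.0.1:10248/healthz can never succeed. SystemVerification was already in the --ignore-preflight-errors list, so only the kubelet-side opt-out named by the warning is missing. A minimal sketch of that opt-out, run on the node; the config path is the one kubeadm wrote during init, the rest is illustrative and was not attempted in this run:

    # Opt back in to cgroup v1 for kubelet >= v1.35; 'failCgroupV1' is the
    # KubeletConfiguration field named in the SystemVerification warning.
    echo "failCgroupV1: false" | sudo tee -a /var/lib/kubelet/config.yaml
    sudo systemctl enable kubelet.service   # also flagged by preflight
    sudo systemctl restart kubelet
    curl -sSL http://127.0.0.1:10248/healthz   # should return "ok" once healthy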
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-333352 -n no-preload-333352
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-333352 -n no-preload-333352: exit status 6 (380.321219ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1213 11:45:19.508172  594583 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-333352" does not appear in /home/jenkins/minikube-integration/22127-307042/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-333352" apiserver is not running, skipping kubectl commands (state="Stopped")
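The status check above also warns that kubectl points at a stale context, and its stderr shows the profile missing from the kubeconfig entirely. On a healthy cluster the warning's own suggestion is the fix; a sketch with this run's profile name:

    minikube update-context -p no-preload-333352
    kubectl config current-context

Here it cannot help yet: the endpoint was never written to the kubeconfig because the apiserver never came up.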
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-333352
helpers_test.go:244: (dbg) docker inspect no-preload-333352:

-- stdout --
	[
	    {
	        "Id": "ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db",
	        "Created": "2025-12-13T11:36:44.52795509Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 568910,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T11:36:44.610473104Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db/hostname",
	        "HostsPath": "/var/lib/docker/containers/ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db/hosts",
	        "LogPath": "/var/lib/docker/containers/ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db/ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db-json.log",
	        "Name": "/no-preload-333352",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-333352:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-333352",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db",
	                "LowerDir": "/var/lib/docker/overlay2/d63500bace015e4f0992e53022eb307c815344bef0eac4b57fc68ef6b6be3891-init/diff:/var/lib/docker/overlay2/5d0ca86320f2c06765995d644a73af30cd6ddb36f5c1a2a6b1ebb24af53cdabd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d63500bace015e4f0992e53022eb307c815344bef0eac4b57fc68ef6b6be3891/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d63500bace015e4f0992e53022eb307c815344bef0eac4b57fc68ef6b6be3891/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d63500bace015e4f0992e53022eb307c815344bef0eac4b57fc68ef6b6be3891/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-333352",
	                "Source": "/var/lib/docker/volumes/no-preload-333352/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-333352",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-333352",
	                "name.minikube.sigs.k8s.io": "no-preload-333352",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "661b691bd6512fd4efbd202820a9bae1c5beb21cce06578707e71b64c02a0d52",
	            "SandboxKey": "/var/run/docker/netns/661b691bd651",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33405"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33406"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33409"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33407"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33408"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-333352": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:23:72:9e:c3:20",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ee20fc50f482b31273047147a2f419c36704bb98933537d0ac5901a560402043",
	                    "EndpointID": "eaefa46f6237ec9d0c60ef1c735019996dda65756a613e136b17ca120c60027b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-333352",
	                        "ca124efb8aeb"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
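The inspect output confirms the container itself is running and that the apiserver's 8443/tcp is published on 127.0.0.1:33408 (see NetworkSettings above). Probing that mapping separates a dead apiserver from a broken port map; with the kubelet crash-looping as shown, expect connection refused:

    curl -k https://127.0.0.1:33408/healthz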
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-333352 -n no-preload-333352
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-333352 -n no-preload-333352: exit status 6 (306.416226ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1213 11:45:19.832553  594670 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-333352" does not appear in /home/jenkins/minikube-integration/22127-307042/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-333352 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-333352 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-333352            │ jenkins │ v1.37.0 │ 13 Dec 25 11:36 UTC │                     │
	│ start   │ -p cert-expiration-086397 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                            │ cert-expiration-086397       │ jenkins │ v1.37.0 │ 13 Dec 25 11:36 UTC │ 13 Dec 25 11:36 UTC │
	│ delete  │ -p cert-expiration-086397                                                                                                                                                                                                                                  │ cert-expiration-086397       │ jenkins │ v1.37.0 │ 13 Dec 25 11:36 UTC │ 13 Dec 25 11:36 UTC │
	│ start   │ -p embed-certs-951675 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:36 UTC │ 13 Dec 25 11:37 UTC │
	│ addons  │ enable metrics-server -p embed-certs-951675 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:38 UTC │ 13 Dec 25 11:38 UTC │
	│ stop    │ -p embed-certs-951675 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:38 UTC │ 13 Dec 25 11:38 UTC │
	│ addons  │ enable dashboard -p embed-certs-951675 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:38 UTC │ 13 Dec 25 11:38 UTC │
	│ start   │ -p embed-certs-951675 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:38 UTC │ 13 Dec 25 11:39 UTC │
	│ image   │ embed-certs-951675 image list --format=json                                                                                                                                                                                                                │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ pause   │ -p embed-certs-951675 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ unpause │ -p embed-certs-951675 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ delete  │ -p embed-certs-951675                                                                                                                                                                                                                                      │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ delete  │ -p embed-certs-951675                                                                                                                                                                                                                                      │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ delete  │ -p disable-driver-mounts-823668                                                                                                                                                                                                                            │ disable-driver-mounts-823668 │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ start   │ -p default-k8s-diff-port-191845 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:40 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-191845 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:40 UTC │
	│ stop    │ -p default-k8s-diff-port-191845 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:40 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-191845 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:40 UTC │
	│ start   │ -p default-k8s-diff-port-191845 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:41 UTC │
	│ image   │ default-k8s-diff-port-191845 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ pause   │ -p default-k8s-diff-port-191845 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ unpause │ -p default-k8s-diff-port-191845 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ delete  │ -p default-k8s-diff-port-191845                                                                                                                                                                                                                            │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ delete  │ -p default-k8s-diff-port-191845                                                                                                                                                                                                                            │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ start   │ -p newest-cni-796924 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-796924            │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 11:41:40
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 11:41:40.611522  589123 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:41:40.611651  589123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:41:40.611663  589123 out.go:374] Setting ErrFile to fd 2...
	I1213 11:41:40.611668  589123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:41:40.611912  589123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 11:41:40.612341  589123 out.go:368] Setting JSON to false
	I1213 11:41:40.613214  589123 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":15853,"bootTime":1765610247,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 11:41:40.613281  589123 start.go:143] virtualization:  
	I1213 11:41:40.617550  589123 out.go:179] * [newest-cni-796924] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:41:40.621011  589123 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:41:40.621109  589123 notify.go:221] Checking for updates...
	I1213 11:41:40.627428  589123 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:41:40.630647  589123 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 11:41:40.633653  589123 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 11:41:40.636743  589123 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:41:40.639875  589123 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:41:40.643470  589123 config.go:182] Loaded profile config "no-preload-333352": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:41:40.643591  589123 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:41:40.680100  589123 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:41:40.680226  589123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:41:40.756616  589123 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:41:40.747142182 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:41:40.756723  589123 docker.go:319] overlay module found
	I1213 11:41:40.759961  589123 out.go:179] * Using the docker driver based on user configuration
	I1213 11:41:40.762776  589123 start.go:309] selected driver: docker
	I1213 11:41:40.762800  589123 start.go:927] validating driver "docker" against <nil>
	I1213 11:41:40.762814  589123 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:41:40.763539  589123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:41:40.821660  589123 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:41:40.812604764 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:41:40.821819  589123 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1213 11:41:40.821853  589123 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1213 11:41:40.822076  589123 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 11:41:40.824951  589123 out.go:179] * Using Docker driver with root privileges
	I1213 11:41:40.827804  589123 cni.go:84] Creating CNI manager for ""
	I1213 11:41:40.827876  589123 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 11:41:40.827892  589123 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
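
	The cni.go lines above record minikube's default-CNI decision: the docker driver paired with the containerd runtime gets kindnet recommended. A minimal Go sketch of that selection logic, under the assumption that this driver/runtime pairing is the deciding input (defaultCNI is an illustrative name, not minikube's API):

	    package main

	    import "fmt"

	    // defaultCNI mirrors the choice logged at cni.go:143: a KIC driver
	    // (docker/podman) with a non-Docker runtime needs a real CNI, and
	    // kindnet is the recommendation seen in this log.
	    func defaultCNI(driver, runtime, requested string) string {
	        if requested != "" {
	            return requested // an explicit --cni always wins
	        }
	        if (driver == "docker" || driver == "podman") && runtime == "containerd" {
	            return "kindnet"
	        }
	        return "bridge"
	    }

	    func main() {
	        fmt.Println(defaultCNI("docker", "containerd", "")) // kindnet
	    }
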
	I1213 11:41:40.827980  589123 start.go:353] cluster config:
	{Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:41:40.831095  589123 out.go:179] * Starting "newest-cni-796924" primary control-plane node in "newest-cni-796924" cluster
	I1213 11:41:40.833926  589123 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 11:41:40.836836  589123 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:41:40.839602  589123 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 11:41:40.839653  589123 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 11:41:40.839668  589123 cache.go:65] Caching tarball of preloaded images
	I1213 11:41:40.839677  589123 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:41:40.839751  589123 preload.go:238] Found /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 11:41:40.839761  589123 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 11:41:40.839868  589123 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/config.json ...
	I1213 11:41:40.839885  589123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/config.json: {Name:mk0ce282ac2d53ca7f0abb05f9aee384330b83fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:40.859227  589123 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:41:40.859251  589123 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:41:40.859271  589123 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:41:40.859304  589123 start.go:360] acquireMachinesLock for newest-cni-796924: {Name:mkb23dc851632c47983afd0f3cb215d071a4c6d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:41:40.859430  589123 start.go:364] duration metric: took 105.773µs to acquireMachinesLock for "newest-cni-796924"
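
	acquireMachinesLock above reports Delay:500ms and Timeout:10m0s, i.e. machine creation is serialized behind a retried lock. A stdlib-only Go sketch of that pattern, assuming a lock file created atomically with O_EXCL (the real implementation differs):

	    package main

	    import (
	        "fmt"
	        "os"
	        "time"
	    )

	    // acquireLock polls every `delay` until the lock file can be created
	    // exclusively, or gives up after `timeout` (cf. Delay/Timeout above).
	    func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
	        deadline := time.Now().Add(timeout)
	        for {
	            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
	            if err == nil {
	                f.Close()
	                return func() { os.Remove(path) }, nil // release function
	            }
	            if time.Now().After(deadline) {
	                return nil, fmt.Errorf("timed out acquiring %s: %w", path, err)
	            }
	            time.Sleep(delay)
	        }
	    }

	    func main() {
	        release, err := acquireLock("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
	        if err != nil {
	            panic(err)
	        }
	        defer release()
	        fmt.Println("lock held")
	    }
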
	I1213 11:41:40.859462  589123 start.go:93] Provisioning new machine with config: &{Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 11:41:40.859540  589123 start.go:125] createHost starting for "" (driver="docker")
	I1213 11:41:40.862994  589123 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 11:41:40.863248  589123 start.go:159] libmachine.API.Create for "newest-cni-796924" (driver="docker")
	I1213 11:41:40.863284  589123 client.go:173] LocalClient.Create starting
	I1213 11:41:40.863374  589123 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem
	I1213 11:41:40.863413  589123 main.go:143] libmachine: Decoding PEM data...
	I1213 11:41:40.863433  589123 main.go:143] libmachine: Parsing certificate...
	I1213 11:41:40.863487  589123 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem
	I1213 11:41:40.863508  589123 main.go:143] libmachine: Decoding PEM data...
	I1213 11:41:40.863527  589123 main.go:143] libmachine: Parsing certificate...
	I1213 11:41:40.863921  589123 cli_runner.go:164] Run: docker network inspect newest-cni-796924 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 11:41:40.879900  589123 cli_runner.go:211] docker network inspect newest-cni-796924 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 11:41:40.879981  589123 network_create.go:284] running [docker network inspect newest-cni-796924] to gather additional debugging logs...
	I1213 11:41:40.880002  589123 cli_runner.go:164] Run: docker network inspect newest-cni-796924
	W1213 11:41:40.894997  589123 cli_runner.go:211] docker network inspect newest-cni-796924 returned with exit code 1
	I1213 11:41:40.895050  589123 network_create.go:287] error running [docker network inspect newest-cni-796924]: docker network inspect newest-cni-796924: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-796924 not found
	I1213 11:41:40.895065  589123 network_create.go:289] output of [docker network inspect newest-cni-796924]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-796924 not found
	
	** /stderr **
	I1213 11:41:40.895186  589123 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:41:40.912767  589123 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-381e4ce3c9ab IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:2d:23:57:0e:cc} reservation:<nil>}
	I1213 11:41:40.913250  589123 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bd1082d121b0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:7a:42:ce:41:ea:ae} reservation:<nil>}
	I1213 11:41:40.913761  589123 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ebeb7162e340 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:cf:aa:41:ac:19} reservation:<nil>}
	I1213 11:41:40.914391  589123 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019db560}
	I1213 11:41:40.914425  589123 network_create.go:124] attempt to create docker network newest-cni-796924 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1213 11:41:40.914493  589123 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-796924 newest-cni-796924
	I1213 11:41:40.975597  589123 network_create.go:108] docker network newest-cni-796924 192.168.76.0/24 created
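
	The network.go entries above scan candidate /24 subnets in order (192.168.49.0, 58.0, 67.0, then 76.0), skipping any that already back a bridge interface. The third octet appears to advance by 9 per attempt; a Go sketch under that inferred assumption (freeSubnet and the step size come from reading this log, not from minikube source):

	    package main

	    import "fmt"

	    // freeSubnet returns the first candidate 192.168.x.0/24 not already taken.
	    func freeSubnet(taken map[string]bool) string {
	        for octet := 49; octet <= 255; octet += 9 { // step inferred from this log
	            cidr := fmt.Sprintf("192.168.%d.0/24", octet)
	            if !taken[cidr] {
	                return cidr
	            }
	        }
	        return ""
	    }

	    func main() {
	        taken := map[string]bool{
	            "192.168.49.0/24": true, // br-381e4ce3c9ab
	            "192.168.58.0/24": true, // br-bd1082d121b0
	            "192.168.67.0/24": true, // br-ebeb7162e340
	        }
	        fmt.Println(freeSubnet(taken)) // 192.168.76.0/24, as chosen above
	    }
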
	I1213 11:41:40.975632  589123 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-796924" container
	I1213 11:41:40.975710  589123 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 11:41:40.992098  589123 cli_runner.go:164] Run: docker volume create newest-cni-796924 --label name.minikube.sigs.k8s.io=newest-cni-796924 --label created_by.minikube.sigs.k8s.io=true
	I1213 11:41:41.011676  589123 oci.go:103] Successfully created a docker volume newest-cni-796924
	I1213 11:41:41.011779  589123 cli_runner.go:164] Run: docker run --rm --name newest-cni-796924-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-796924 --entrypoint /usr/bin/test -v newest-cni-796924:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 11:41:41.562335  589123 oci.go:107] Successfully prepared a docker volume newest-cni-796924
	I1213 11:41:41.562406  589123 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 11:41:41.562420  589123 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 11:41:41.562520  589123 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-796924:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 11:41:45.483539  589123 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-796924:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.920978562s)
	I1213 11:41:45.483577  589123 kic.go:203] duration metric: took 3.921153184s to extract preloaded images to volume ...
	W1213 11:41:45.483725  589123 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 11:41:45.483849  589123 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 11:41:45.544786  589123 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-796924 --name newest-cni-796924 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-796924 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-796924 --network newest-cni-796924 --ip 192.168.76.2 --volume newest-cni-796924:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 11:41:45.837300  589123 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Running}}
	I1213 11:41:45.859478  589123 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:41:45.883282  589123 cli_runner.go:164] Run: docker exec newest-cni-796924 stat /var/lib/dpkg/alternatives/iptables
	I1213 11:41:45.939942  589123 oci.go:144] the created container "newest-cni-796924" has a running status.
	I1213 11:41:45.939979  589123 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa...
	I1213 11:41:46.475943  589123 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 11:41:46.497112  589123 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:41:46.514893  589123 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 11:41:46.514914  589123 kic_runner.go:114] Args: [docker exec --privileged newest-cni-796924 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 11:41:46.555872  589123 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:41:46.582882  589123 machine.go:94] provisionDockerMachine start ...
	I1213 11:41:46.583000  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:46.601258  589123 main.go:143] libmachine: Using SSH client type: native
	I1213 11:41:46.601613  589123 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33430 <nil> <nil>}
	I1213 11:41:46.601628  589123 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:41:46.602167  589123 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43966->127.0.0.1:33430: read: connection reset by peer
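
	The dial error above is expected: the container's sshd is not yet accepting connections, so the provisioner retries until the handshake succeeds (next log line). A stdlib-only Go sketch of such a readiness wait, with a plain TCP probe standing in for the real SSH client:

	    package main

	    import (
	        "fmt"
	        "net"
	        "time"
	    )

	    // waitTCP retries a short-timeout dial until addr accepts connections
	    // or the overall deadline passes.
	    func waitTCP(addr string, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for {
	            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	            if err == nil {
	                conn.Close()
	                return nil
	            }
	            if time.Now().After(deadline) {
	                return fmt.Errorf("%s not reachable: %w", addr, err)
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	    }

	    func main() {
	        // 127.0.0.1:33430 is the published SSH port from the log above.
	        fmt.Println(waitTCP("127.0.0.1:33430", time.Minute))
	    }
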
	I1213 11:41:49.762525  589123 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-796924
	
	I1213 11:41:49.762610  589123 ubuntu.go:182] provisioning hostname "newest-cni-796924"
	I1213 11:41:49.762751  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:49.782841  589123 main.go:143] libmachine: Using SSH client type: native
	I1213 11:41:49.783298  589123 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33430 <nil> <nil>}
	I1213 11:41:49.783328  589123 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-796924 && echo "newest-cni-796924" | sudo tee /etc/hostname
	I1213 11:41:49.948352  589123 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-796924
	
	I1213 11:41:49.948435  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:49.965977  589123 main.go:143] libmachine: Using SSH client type: native
	I1213 11:41:49.966316  589123 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33430 <nil> <nil>}
	I1213 11:41:49.966341  589123 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-796924' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-796924/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-796924' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:41:50.147128  589123 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:41:50.147171  589123 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-307042/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-307042/.minikube}
	I1213 11:41:50.147219  589123 ubuntu.go:190] setting up certificates
	I1213 11:41:50.147230  589123 provision.go:84] configureAuth start
	I1213 11:41:50.147297  589123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-796924
	I1213 11:41:50.165702  589123 provision.go:143] copyHostCerts
	I1213 11:41:50.165784  589123 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem, removing ...
	I1213 11:41:50.165802  589123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem
	I1213 11:41:50.165914  589123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem (1082 bytes)
	I1213 11:41:50.166068  589123 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem, removing ...
	I1213 11:41:50.166080  589123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem
	I1213 11:41:50.166123  589123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem (1123 bytes)
	I1213 11:41:50.166210  589123 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem, removing ...
	I1213 11:41:50.166226  589123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem
	I1213 11:41:50.166257  589123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem (1675 bytes)
	I1213 11:41:50.166335  589123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem org=jenkins.newest-cni-796924 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-796924]
	I1213 11:41:50.575993  589123 provision.go:177] copyRemoteCerts
	I1213 11:41:50.576089  589123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:41:50.576156  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:50.593521  589123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:41:50.702596  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 11:41:50.720289  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:41:50.738001  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 11:41:50.755303  589123 provision.go:87] duration metric: took 608.049982ms to configureAuth
	I1213 11:41:50.755333  589123 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:41:50.755533  589123 config.go:182] Loaded profile config "newest-cni-796924": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:41:50.755547  589123 machine.go:97] duration metric: took 4.172642608s to provisionDockerMachine
	I1213 11:41:50.755555  589123 client.go:176] duration metric: took 9.892260099s to LocalClient.Create
	I1213 11:41:50.755575  589123 start.go:167] duration metric: took 9.892327365s to libmachine.API.Create "newest-cni-796924"
	I1213 11:41:50.755586  589123 start.go:293] postStartSetup for "newest-cni-796924" (driver="docker")
	I1213 11:41:50.755596  589123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:41:50.755647  589123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:41:50.755689  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:50.772594  589123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:41:50.878962  589123 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:41:50.882465  589123 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:41:50.882496  589123 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:41:50.882513  589123 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/addons for local assets ...
	I1213 11:41:50.882569  589123 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/files for local assets ...
	I1213 11:41:50.882649  589123 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> 3089152.pem in /etc/ssl/certs
	I1213 11:41:50.882784  589123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:41:50.890136  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 11:41:50.909142  589123 start.go:296] duration metric: took 153.541145ms for postStartSetup
	I1213 11:41:50.909520  589123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-796924
	I1213 11:41:50.926272  589123 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/config.json ...
	I1213 11:41:50.926557  589123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:41:50.926615  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:50.943196  589123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:41:51.043825  589123 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:41:51.048871  589123 start.go:128] duration metric: took 10.189316484s to createHost
	I1213 11:41:51.048902  589123 start.go:83] releasing machines lock for "newest-cni-796924", held for 10.189458492s
	I1213 11:41:51.048990  589123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-796924
	I1213 11:41:51.066001  589123 ssh_runner.go:195] Run: cat /version.json
	I1213 11:41:51.066070  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:51.066359  589123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:41:51.066428  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:51.089473  589123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:41:51.096259  589123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:41:51.199327  589123 ssh_runner.go:195] Run: systemctl --version
	I1213 11:41:51.296961  589123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:41:51.301350  589123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:41:51.301424  589123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:41:51.333583  589123 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
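
	The find/mv pipeline above disables competing bridge and podman CNI configs by renaming them with a .mk_disabled suffix instead of deleting them, which keeps the step reversible. A rough Go equivalent, assuming the same glob patterns (illustrative only, not minikube's code):

	    package main

	    import (
	        "fmt"
	        "os"
	        "path/filepath"
	    )

	    func main() {
	        for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
	            matches, _ := filepath.Glob(pat)
	            for _, m := range matches {
	                if filepath.Ext(m) == ".mk_disabled" {
	                    continue // already disabled
	                }
	                if err := os.Rename(m, m+".mk_disabled"); err == nil {
	                    fmt.Println("disabled", m)
	                }
	            }
	        }
	    }
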
	I1213 11:41:51.333658  589123 start.go:496] detecting cgroup driver to use...
	I1213 11:41:51.333709  589123 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 11:41:51.333790  589123 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 11:41:51.348970  589123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:41:51.362099  589123 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:41:51.362222  589123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:41:51.379694  589123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:41:51.398786  589123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:41:51.510657  589123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:41:51.629156  589123 docker.go:234] disabling docker service ...
	I1213 11:41:51.629223  589123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:41:51.650731  589123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:41:51.664169  589123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:41:51.793148  589123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:41:51.904796  589123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:41:51.919458  589123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:41:51.942455  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 11:41:51.956281  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 11:41:51.965941  589123 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 11:41:51.966013  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
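
	The sed above forces SystemdCgroup = false so containerd's runc shim uses the cgroupfs driver detected on the host. The same rewrite expressed with Go's regexp package (illustrative; minikube shells out to sed exactly as logged, and the config fragment is an assumed example):

	    package main

	    import (
	        "fmt"
	        "regexp"
	    )

	    func main() {
	        // A fragment of /etc/containerd/config.toml before the edit.
	        config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	          SystemdCgroup = true`
	        // (?m) makes ^/$ match per line, like sed's line addressing;
	        // ${1} preserves the captured indentation.
	        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	        fmt.Println(re.ReplaceAllString(config, "${1}SystemdCgroup = false"))
	    }
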
	I1213 11:41:51.977493  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:41:51.987404  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 11:41:52.000948  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:41:52.013279  589123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:41:52.023039  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 11:41:52.032853  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 11:41:52.042519  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 11:41:52.052346  589123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:41:52.060125  589123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:41:52.068281  589123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:41:52.179247  589123 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 11:41:52.320321  589123 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 11:41:52.320429  589123 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 11:41:52.324439  589123 start.go:564] Will wait 60s for crictl version
	I1213 11:41:52.324501  589123 ssh_runner.go:195] Run: which crictl
	I1213 11:41:52.328708  589123 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:41:52.357589  589123 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 11:41:52.357683  589123 ssh_runner.go:195] Run: containerd --version
	I1213 11:41:52.383274  589123 ssh_runner.go:195] Run: containerd --version
	I1213 11:41:52.413360  589123 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 11:41:52.416432  589123 cli_runner.go:164] Run: docker network inspect newest-cni-796924 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:41:52.432557  589123 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 11:41:52.436286  589123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:41:52.449106  589123 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 11:41:52.452071  589123 kubeadm.go:884] updating cluster {Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:41:52.452217  589123 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 11:41:52.452308  589123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:41:52.477318  589123 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 11:41:52.477343  589123 containerd.go:534] Images already preloaded, skipping extraction
	I1213 11:41:52.477404  589123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:41:52.505926  589123 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 11:41:52.505953  589123 cache_images.go:86] Images are preloaded, skipping loading
	I1213 11:41:52.505961  589123 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 11:41:52.506065  589123 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-796924 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:41:52.506135  589123 ssh_runner.go:195] Run: sudo crictl info
	I1213 11:41:52.531708  589123 cni.go:84] Creating CNI manager for ""
	I1213 11:41:52.531733  589123 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 11:41:52.531753  589123 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 11:41:52.531776  589123 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-796924 NodeName:newest-cni-796924 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:41:52.531907  589123 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-796924"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
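
	A kubeadm config like the one above is rendered from the cluster parameters and, a few lines below, copied to /var/tmp/minikube/kubeadm.yaml.new. A toy text/template rendering of a few of those fields (the template and struct here are illustrative, not minikube's):

	    package main

	    import (
	        "os"
	        "text/template"
	    )

	    const tmpl = "apiVersion: kubeadm.k8s.io/v1beta4\n" +
	        "kind: ClusterConfiguration\n" +
	        "kubernetesVersion: {{.KubernetesVersion}}\n" +
	        "networking:\n" +
	        "  dnsDomain: cluster.local\n" +
	        "  podSubnet: \"{{.PodSubnet}}\"\n" +
	        "  serviceSubnet: {{.ServiceSubnet}}\n"

	    func main() {
	        params := struct{ KubernetesVersion, PodSubnet, ServiceSubnet string }{
	            "v1.35.0-beta.0", "10.42.0.0/16", "10.96.0.0/12",
	        }
	        template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, params)
	    }
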
	
	I1213 11:41:52.531983  589123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 11:41:52.540473  589123 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 11:41:52.540571  589123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:41:52.548635  589123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 11:41:52.562445  589123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 11:41:52.579341  589123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1213 11:41:52.593144  589123 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:41:52.596805  589123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:41:52.607006  589123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:41:52.727771  589123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:41:52.751356  589123 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924 for IP: 192.168.76.2
	I1213 11:41:52.751381  589123 certs.go:195] generating shared ca certs ...
	I1213 11:41:52.751399  589123 certs.go:227] acquiring lock for ca certs: {Name:mkca84a85b1b664dba08ef567e84d493256f825e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:52.751547  589123 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key
	I1213 11:41:52.751597  589123 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key
	I1213 11:41:52.751607  589123 certs.go:257] generating profile certs ...
	I1213 11:41:52.751662  589123 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/client.key
	I1213 11:41:52.751679  589123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/client.crt with IP's: []
	I1213 11:41:53.086363  589123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/client.crt ...
	I1213 11:41:53.086398  589123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/client.crt: {Name:mk66b963bdd54f4b935fe2fc7acd97dde553339b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:53.086603  589123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/client.key ...
	I1213 11:41:53.086620  589123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/client.key: {Name:mk98638456845d9072484c2ea9cf4343d6af1634 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:53.086739  589123 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key.ced45374
	I1213 11:41:53.086760  589123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.crt.ced45374 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1213 11:41:53.240504  589123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.crt.ced45374 ...
	I1213 11:41:53.240537  589123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.crt.ced45374: {Name:mkabd19bc7e960d2c555d82ddd752e663c8f6cd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:53.240708  589123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key.ced45374 ...
	I1213 11:41:53.240722  589123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key.ced45374: {Name:mk3e4fdd1c06bfd329cc4a39da890d8da6317b83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:53.240816  589123 certs.go:382] copying /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.crt.ced45374 -> /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.crt
	I1213 11:41:53.240898  589123 certs.go:386] copying /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key.ced45374 -> /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key
	I1213 11:41:53.240954  589123 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.key
	I1213 11:41:53.240973  589123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.crt with IP's: []
	I1213 11:41:53.471880  589123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.crt ...
	I1213 11:41:53.471916  589123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.crt: {Name:mkc7d686a714b0dc00954cf052cbfbc601a1b715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:53.472127  589123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.key ...
	I1213 11:41:53.472146  589123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.key: {Name:mk3b8f645a4e8504ec9bd2eed45071861029af54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:53.472358  589123 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem (1338 bytes)
	W1213 11:41:53.472406  589123 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915_empty.pem, impossibly tiny 0 bytes
	I1213 11:41:53.472425  589123 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:41:53.472456  589123 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:41:53.472484  589123 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:41:53.472514  589123 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem (1675 bytes)
	I1213 11:41:53.472565  589123 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 11:41:53.473197  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:41:53.497619  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:41:53.520852  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:41:53.540142  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:41:53.558660  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 11:41:53.576992  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 11:41:53.595267  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:41:53.613794  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 11:41:53.632148  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /usr/share/ca-certificates/3089152.pem (1708 bytes)
	I1213 11:41:53.650879  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:41:53.676104  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem --> /usr/share/ca-certificates/308915.pem (1338 bytes)
	I1213 11:41:53.697746  589123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:41:53.714070  589123 ssh_runner.go:195] Run: openssl version
	I1213 11:41:53.722093  589123 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3089152.pem
	I1213 11:41:53.730578  589123 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3089152.pem /etc/ssl/certs/3089152.pem
	I1213 11:41:53.738421  589123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3089152.pem
	I1213 11:41:53.742437  589123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:22 /usr/share/ca-certificates/3089152.pem
	I1213 11:41:53.742533  589123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3089152.pem
	I1213 11:41:53.786290  589123 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:41:53.794147  589123 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3089152.pem /etc/ssl/certs/3ec20f2e.0
	I1213 11:41:53.802168  589123 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:41:53.809707  589123 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:41:53.817606  589123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:41:53.821659  589123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:41:53.821728  589123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:41:53.863050  589123 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:41:53.870933  589123 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 11:41:53.878551  589123 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/308915.pem
	I1213 11:41:53.886061  589123 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/308915.pem /etc/ssl/certs/308915.pem
	I1213 11:41:53.893998  589123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/308915.pem
	I1213 11:41:53.898172  589123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:22 /usr/share/ca-certificates/308915.pem
	I1213 11:41:53.898239  589123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308915.pem
	I1213 11:41:53.939253  589123 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 11:41:53.946895  589123 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/308915.pem /etc/ssl/certs/51391683.0
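	The openssl/ln sequence above is how each PEM is installed into the OpenSSL trust directory: the certificate's subject hash (3ec20f2e, b5213941, and 51391683 in this run) becomes the name of a <hash>.0 symlink under /etc/ssl/certs. A minimal sketch of one iteration, assuming the cert path from this run:

	    # Link a CA certificate under its OpenSSL subject-hash name.
	    PEM=/usr/share/ca-certificates/308915.pem
	    HASH=$(openssl x509 -hash -noout -in "$PEM")
	    sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"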
	I1213 11:41:53.954761  589123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:41:53.958396  589123 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 11:41:53.958494  589123 kubeadm.go:401] StartCluster: {Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:41:53.958606  589123 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 11:41:53.958783  589123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:41:53.989529  589123 cri.go:89] found id: ""
	I1213 11:41:53.989604  589123 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:41:53.998028  589123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 11:41:54.008661  589123 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:41:54.008741  589123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:41:54.018340  589123 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:41:54.018369  589123 kubeadm.go:158] found existing configuration files:
	
	I1213 11:41:54.018431  589123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:41:54.027307  589123 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:41:54.027393  589123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:41:54.036120  589123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:41:54.044655  589123 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:41:54.044733  589123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:41:54.053302  589123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:41:54.061899  589123 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:41:54.061991  589123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:41:54.070981  589123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:41:54.079553  589123 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:41:54.079622  589123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
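	The block above is the stale-config check: for each kubeconfig under /etc/kubernetes, minikube greps for the expected endpoint and deletes the file when the endpoint is missing (here every grep exits with status 2 simply because this is a first start and the files do not exist yet). Condensed into a loop, assuming the endpoint string from this run:

	    # Drop kubeconfigs that do not point at the expected control-plane endpoint.
	    EP='https://control-plane.minikube.internal:8443'
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q "$EP" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	    done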
	I1213 11:41:54.087398  589123 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:41:54.133199  589123 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:41:54.133519  589123 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:41:54.230705  589123 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:41:54.230782  589123 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:41:54.230824  589123 kubeadm.go:319] OS: Linux
	I1213 11:41:54.230875  589123 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:41:54.230929  589123 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:41:54.230979  589123 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:41:54.231032  589123 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:41:54.231083  589123 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:41:54.231135  589123 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:41:54.231184  589123 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:41:54.231236  589123 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:41:54.231285  589123 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:41:54.298600  589123 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:41:54.298731  589123 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:41:54.298837  589123 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:41:54.307104  589123 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:41:54.313608  589123 out.go:252]   - Generating certificates and keys ...
	I1213 11:41:54.313778  589123 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:41:54.313890  589123 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:41:54.510481  589123 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 11:41:54.575310  589123 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 11:41:54.686709  589123 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 11:41:54.914237  589123 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 11:41:55.329374  589123 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 11:41:55.329538  589123 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-796924] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 11:41:55.443297  589123 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 11:41:55.443660  589123 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-796924] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 11:41:55.929252  589123 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 11:41:56.099892  589123 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 11:41:56.662486  589123 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 11:41:56.662923  589123 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:41:56.728098  589123 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:41:56.987601  589123 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:41:57.419088  589123 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:41:57.640413  589123 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:41:58.149864  589123 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:41:58.150638  589123 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:41:58.153451  589123 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:41:58.157369  589123 out.go:252]   - Booting up control plane ...
	I1213 11:41:58.157498  589123 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:41:58.157584  589123 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:41:58.157650  589123 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:41:58.194714  589123 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:41:58.194860  589123 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:41:58.202073  589123 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:41:58.202506  589123 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:41:58.202564  589123 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:41:58.355362  589123 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:41:58.355487  589123 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 11:45:15.897209  568526 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000248006s
	I1213 11:45:15.897238  568526 kubeadm.go:319] 
	I1213 11:45:15.897296  568526 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 11:45:15.897329  568526 kubeadm.go:319] 	- The kubelet is not running
	I1213 11:45:15.897444  568526 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 11:45:15.897460  568526 kubeadm.go:319] 
	I1213 11:45:15.897565  568526 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 11:45:15.897602  568526 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 11:45:15.897639  568526 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 11:45:15.897647  568526 kubeadm.go:319] 
	I1213 11:45:15.901779  568526 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 11:45:15.902208  568526 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 11:45:15.902322  568526 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 11:45:15.902560  568526 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 11:45:15.902570  568526 kubeadm.go:319] 
	I1213 11:45:15.902639  568526 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 11:45:15.902721  568526 kubeadm.go:403] duration metric: took 8m6.858200115s to StartCluster
	I1213 11:45:15.902773  568526 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:45:15.902832  568526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:45:15.928207  568526 cri.go:89] found id: ""
	I1213 11:45:15.928245  568526 logs.go:282] 0 containers: []
	W1213 11:45:15.928254  568526 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:45:15.928262  568526 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:45:15.928320  568526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:45:15.954471  568526 cri.go:89] found id: ""
	I1213 11:45:15.954508  568526 logs.go:282] 0 containers: []
	W1213 11:45:15.954520  568526 logs.go:284] No container was found matching "etcd"
	I1213 11:45:15.954531  568526 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:45:15.954610  568526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:45:15.979197  568526 cri.go:89] found id: ""
	I1213 11:45:15.979226  568526 logs.go:282] 0 containers: []
	W1213 11:45:15.979236  568526 logs.go:284] No container was found matching "coredns"
	I1213 11:45:15.979243  568526 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:45:15.979315  568526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:45:16.008075  568526 cri.go:89] found id: ""
	I1213 11:45:16.008098  568526 logs.go:282] 0 containers: []
	W1213 11:45:16.008107  568526 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:45:16.008118  568526 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:45:16.008193  568526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:45:16.034169  568526 cri.go:89] found id: ""
	I1213 11:45:16.034191  568526 logs.go:282] 0 containers: []
	W1213 11:45:16.034199  568526 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:45:16.034207  568526 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:45:16.034265  568526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:45:16.058826  568526 cri.go:89] found id: ""
	I1213 11:45:16.058854  568526 logs.go:282] 0 containers: []
	W1213 11:45:16.058862  568526 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:45:16.058869  568526 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:45:16.058928  568526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:45:16.083123  568526 cri.go:89] found id: ""
	I1213 11:45:16.083151  568526 logs.go:282] 0 containers: []
	W1213 11:45:16.083160  568526 logs.go:284] No container was found matching "kindnet"
	I1213 11:45:16.083171  568526 logs.go:123] Gathering logs for dmesg ...
	I1213 11:45:16.083184  568526 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:45:16.100676  568526 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:45:16.100707  568526 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:45:16.166022  568526 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:45:16.157896    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.158620    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.160237    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.160701    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.162431    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:45:16.157896    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.158620    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.160237    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.160701    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.162431    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:45:16.166047  568526 logs.go:123] Gathering logs for containerd ...
	I1213 11:45:16.166060  568526 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:45:16.207842  568526 logs.go:123] Gathering logs for container status ...
	I1213 11:45:16.207880  568526 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:45:16.235350  568526 logs.go:123] Gathering logs for kubelet ...
	I1213 11:45:16.235376  568526 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 11:45:16.294386  568526 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000248006s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
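	kubeadm's wait-control-plane phase polls the kubelet health endpoint quoted in the error above; running the same probe by hand distinguishes "nothing listening" (connection refused or timeout, as in this run) from an unhealthy reply. A sketch using only the commands the error message itself names, plus a bounded curl:

	    # Re-run the health probe kubeadm timed out on, then inspect the service.
	    curl -sS --max-time 5 http://127.0.0.1:10248/healthz; echo
	    systemctl status kubelet --no-pager
	    journalctl -xeu kubelet | tail -n 50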
	W1213 11:45:16.294456  568526 out.go:285] * 
	W1213 11:45:16.294516  568526 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000248006s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 11:45:16.294544  568526 out.go:285] * 
	W1213 11:45:16.296685  568526 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 11:45:16.302469  568526 out.go:203] 
	W1213 11:45:16.306292  568526 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000248006s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 11:45:16.306365  568526 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 11:45:16.306395  568526 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
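	Note that in this run the kubelet journal (see the ==> kubelet <== section below) shows the real failure is kubelet v1.35 refusing to validate its configuration on a cgroup v1 host, matching the SystemVerification warning about setting 'FailCgroupV1' to 'false'. The init output already shows kubeadm's patch mechanism in use ([patches] Applied patch ... to target "kubeletconfiguration"), so one possible way to apply that override is a kubeadm patches directory; the file name below follows kubeadm's --patches naming convention, and the sketch is hypothetical and untested against this exact setup:

	    # Hypothetical patch relaxing the cgroup v1 check for kubelet >= v1.35.
	    mkdir -p /tmp/kubeadm-patches
	    cat > /tmp/kubeadm-patches/kubeletconfiguration+strategic.yaml <<'EOF'
	    failCgroupV1: false
	    EOF
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --patches /tmp/kubeadm-patches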
	I1213 11:45:16.309964  568526 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 11:36:56 no-preload-333352 containerd[759]: time="2025-12-13T11:36:56.089357963Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:36:57 no-preload-333352 containerd[759]: time="2025-12-13T11:36:57.511622237Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 13 11:36:57 no-preload-333352 containerd[759]: time="2025-12-13T11:36:57.516065454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 13 11:36:57 no-preload-333352 containerd[759]: time="2025-12-13T11:36:57.531326305Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:36:57 no-preload-333352 containerd[759]: time="2025-12-13T11:36:57.531822769Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:36:58 no-preload-333352 containerd[759]: time="2025-12-13T11:36:58.968197722Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 13 11:36:58 no-preload-333352 containerd[759]: time="2025-12-13T11:36:58.971116854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 13 11:36:58 no-preload-333352 containerd[759]: time="2025-12-13T11:36:58.980545274Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:36:58 no-preload-333352 containerd[759]: time="2025-12-13T11:36:58.981362816Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:37:02 no-preload-333352 containerd[759]: time="2025-12-13T11:37:02.214365383Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 13 11:37:02 no-preload-333352 containerd[759]: time="2025-12-13T11:37:02.217628084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 13 11:37:02 no-preload-333352 containerd[759]: time="2025-12-13T11:37:02.241331613Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:37:02 no-preload-333352 containerd[759]: time="2025-12-13T11:37:02.242087346Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:37:03 no-preload-333352 containerd[759]: time="2025-12-13T11:37:03.217262130Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 13 11:37:03 no-preload-333352 containerd[759]: time="2025-12-13T11:37:03.220012055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 13 11:37:03 no-preload-333352 containerd[759]: time="2025-12-13T11:37:03.228672623Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:37:03 no-preload-333352 containerd[759]: time="2025-12-13T11:37:03.229475338Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:37:04 no-preload-333352 containerd[759]: time="2025-12-13T11:37:04.630418294Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 13 11:37:04 no-preload-333352 containerd[759]: time="2025-12-13T11:37:04.633143086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 13 11:37:04 no-preload-333352 containerd[759]: time="2025-12-13T11:37:04.641567747Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:37:04 no-preload-333352 containerd[759]: time="2025-12-13T11:37:04.642255121Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:37:04 no-preload-333352 containerd[759]: time="2025-12-13T11:37:04.996924296Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 13 11:37:05 no-preload-333352 containerd[759]: time="2025-12-13T11:37:05.004833973Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 13 11:37:05 no-preload-333352 containerd[759]: time="2025-12-13T11:37:05.013913352Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:37:05 no-preload-333352 containerd[759]: time="2025-12-13T11:37:05.014372006Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:45:20.519293    5811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:20.520081    5811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:20.521943    5811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:20.522260    5811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:20.523762    5811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
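	The repeated "connection refused" on localhost:8443 means no apiserver is listening at all (consistent with the empty container list above), not that the kubeconfig credentials are wrong. A quick check that separates the two cases, assuming ss is available on the node:

	    # Confirm whether anything is bound to the apiserver port.
	    sudo ss -ltnp | grep ':8443' || echo 'no listener on 8443'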
	
	
	==> dmesg <==
	[ +12.907693] overlayfs: idmapped layers are currently not supported
	[Dec13 09:25] overlayfs: idmapped layers are currently not supported
	[ +26.192425] overlayfs: idmapped layers are currently not supported
	[Dec13 09:26] overlayfs: idmapped layers are currently not supported
	[ +25.729788] overlayfs: idmapped layers are currently not supported
	[Dec13 09:27] overlayfs: idmapped layers are currently not supported
	[Dec13 09:28] overlayfs: idmapped layers are currently not supported
	[Dec13 09:31] overlayfs: idmapped layers are currently not supported
	[Dec13 09:32] overlayfs: idmapped layers are currently not supported
	[ +17.979093] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec13 09:43] overlayfs: idmapped layers are currently not supported
	[Dec13 09:45] overlayfs: idmapped layers are currently not supported
	[ +25.885102] overlayfs: idmapped layers are currently not supported
	[Dec13 09:46] overlayfs: idmapped layers are currently not supported
	[ +22.078149] overlayfs: idmapped layers are currently not supported
	[Dec13 09:47] overlayfs: idmapped layers are currently not supported
	[Dec13 09:48] overlayfs: idmapped layers are currently not supported
	[Dec13 09:49] overlayfs: idmapped layers are currently not supported
	[Dec13 09:51] overlayfs: idmapped layers are currently not supported
	[ +17.043564] overlayfs: idmapped layers are currently not supported
	[Dec13 09:52] overlayfs: idmapped layers are currently not supported
	[Dec13 09:53] overlayfs: idmapped layers are currently not supported
	[Dec13 10:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:19] hrtimer: interrupt took 21247146 ns
	
	
	==> kernel <==
	 11:45:20 up  4:27,  0 user,  load average: 0.97, 1.37, 1.87
	Linux no-preload-333352 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 11:45:17 no-preload-333352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:45:17 no-preload-333352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 13 11:45:17 no-preload-333352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:45:17 no-preload-333352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:45:18 no-preload-333352 kubelet[5573]: E1213 11:45:18.019974    5573 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:45:18 no-preload-333352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:45:18 no-preload-333352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:45:18 no-preload-333352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 13 11:45:18 no-preload-333352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:45:18 no-preload-333352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:45:18 no-preload-333352 kubelet[5601]: E1213 11:45:18.751381    5601 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:45:18 no-preload-333352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:45:18 no-preload-333352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:45:19 no-preload-333352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 13 11:45:19 no-preload-333352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:45:19 no-preload-333352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:45:19 no-preload-333352 kubelet[5705]: E1213 11:45:19.470549    5705 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:45:19 no-preload-333352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:45:19 no-preload-333352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:45:20 no-preload-333352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
	Dec 13 11:45:20 no-preload-333352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:45:20 no-preload-333352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:45:20 no-preload-333352 kubelet[5735]: E1213 11:45:20.258167    5735 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:45:20 no-preload-333352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:45:20 no-preload-333352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
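The kubelet journal above shows a tight restart loop (restart counter 322 through 325), every attempt failing the same validation check: this kubelet build refuses to run on a host using cgroup v1. A minimal diagnostic sketch for confirming which cgroup version a host (or the minikube node container) presents, using standard tooling:

    # cgroup2fs means cgroup v2; tmpfs means the legacy cgroup v1 hierarchy
    stat -fc %T /sys/fs/cgroup/
    # Docker reports the same information as a plain 1 or 2
    docker info --format '{{.CgroupVersion}}'

On this Ubuntu 20.04 / kernel 5.15 host the first command would be expected to print tmpfs, which matches the validation failure above.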
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-333352 -n no-preload-333352
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-333352 -n no-preload-333352: exit status 6 (376.245455ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1213 11:45:21.000788  594892 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-333352" does not appear in /home/jenkins/minikube-integration/22127-307042/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-333352" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (3.09s)
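The exit status 6 path above stems from the kubeconfig endpoint error in the stderr: the profile's cluster entry is missing from the kubeconfig file it names. A quick sketch for checking what the active kubeconfig actually records (standard kubectl subcommands, nothing minikube-specific):

    # list contexts known to the active kubeconfig
    kubectl config get-contexts
    # or just the cluster names
    kubectl config view -o jsonpath='{.clusters[*].name}'

If no-preload-333352 is absent from both, the status error and the skipped kubectl commands follow directly.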

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (85.31s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-333352 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1213 11:45:22.657083  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/default-k8s-diff-port-191845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:45:32.898564  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/default-k8s-diff-port-191845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:45:35.910824  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/old-k8s-version-624185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:45:53.380048  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/default-k8s-diff-port-191845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:46:34.341503  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/default-k8s-diff-port-191845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-333352 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m23.640636662s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
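The stderr suggests --validate=false, but validation is only the first call to hit the dead endpoint: every request to localhost:8443 is refused because the apiserver is not running. As a sketch of why the suggested flag would not rescue this run, re-running one of the applies from the callback with validation disabled (same kubeconfig, binary, and manifest paths as above) would still fail at the connection stage:

    # assumes the node's kubeconfig and kubectl paths from the callback above
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/metrics-apiservice.yaml
    # still fails: dial tcp [::1]:8443: connect: connection refused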
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-333352 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-333352 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-333352 describe deploy/metrics-server -n kube-system: exit status 1 (55.754974ms)

** stderr ** 
	error: context "no-preload-333352" does not exist

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-333352 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
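Both this describe failure and the earlier status warnings point at the same stale kubeconfig, and the warning text in the status output names the repair itself. A hedged sketch of applying it, noting it can only succeed once the apiserver is reachable again:

    # rewrite the kubeconfig entry for this profile to the current endpoint
    minikube update-context -p no-preload-333352
    # afterwards the context should resolve
    kubectl --context no-preload-333352 get deploy/metrics-server -n kube-system

Here the kubelet crash loop has to be resolved first; update-context cannot restore an endpoint that is not serving.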
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-333352
helpers_test.go:244: (dbg) docker inspect no-preload-333352:

-- stdout --
	[
	    {
	        "Id": "ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db",
	        "Created": "2025-12-13T11:36:44.52795509Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 568910,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T11:36:44.610473104Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db/hostname",
	        "HostsPath": "/var/lib/docker/containers/ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db/hosts",
	        "LogPath": "/var/lib/docker/containers/ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db/ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db-json.log",
	        "Name": "/no-preload-333352",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-333352:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-333352",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db",
	                "LowerDir": "/var/lib/docker/overlay2/d63500bace015e4f0992e53022eb307c815344bef0eac4b57fc68ef6b6be3891-init/diff:/var/lib/docker/overlay2/5d0ca86320f2c06765995d644a73af30cd6ddb36f5c1a2a6b1ebb24af53cdabd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d63500bace015e4f0992e53022eb307c815344bef0eac4b57fc68ef6b6be3891/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d63500bace015e4f0992e53022eb307c815344bef0eac4b57fc68ef6b6be3891/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d63500bace015e4f0992e53022eb307c815344bef0eac4b57fc68ef6b6be3891/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-333352",
	                "Source": "/var/lib/docker/volumes/no-preload-333352/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-333352",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-333352",
	                "name.minikube.sigs.k8s.io": "no-preload-333352",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "661b691bd6512fd4efbd202820a9bae1c5beb21cce06578707e71b64c02a0d52",
	            "SandboxKey": "/var/run/docker/netns/661b691bd651",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33405"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33406"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33409"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33407"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33408"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-333352": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:23:72:9e:c3:20",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ee20fc50f482b31273047147a2f419c36704bb98933537d0ac5901a560402043",
	                    "EndpointID": "eaefa46f6237ec9d0c60ef1c735019996dda65756a613e136b17ca120c60027b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-333352",
	                        "ca124efb8aeb"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
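When only a single field from the inspect dump is needed, the same Go-template syntax the harness itself uses later in this log (for the 22/tcp port lookup) extracts it directly. A small sketch against the container state shown above:

    # host port mapped to the apiserver port inside the container
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-333352
    # for the state captured above this prints 33408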
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-333352 -n no-preload-333352
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-333352 -n no-preload-333352: exit status 6 (441.257126ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1213 11:46:45.148595  596474 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-333352" does not appear in /home/jenkins/minikube-integration/22127-307042/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-333352 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-086397 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                            │ cert-expiration-086397       │ jenkins │ v1.37.0 │ 13 Dec 25 11:36 UTC │ 13 Dec 25 11:36 UTC │
	│ delete  │ -p cert-expiration-086397                                                                                                                                                                                                                                  │ cert-expiration-086397       │ jenkins │ v1.37.0 │ 13 Dec 25 11:36 UTC │ 13 Dec 25 11:36 UTC │
	│ start   │ -p embed-certs-951675 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:36 UTC │ 13 Dec 25 11:37 UTC │
	│ addons  │ enable metrics-server -p embed-certs-951675 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:38 UTC │ 13 Dec 25 11:38 UTC │
	│ stop    │ -p embed-certs-951675 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:38 UTC │ 13 Dec 25 11:38 UTC │
	│ addons  │ enable dashboard -p embed-certs-951675 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:38 UTC │ 13 Dec 25 11:38 UTC │
	│ start   │ -p embed-certs-951675 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:38 UTC │ 13 Dec 25 11:39 UTC │
	│ image   │ embed-certs-951675 image list --format=json                                                                                                                                                                                                                │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ pause   │ -p embed-certs-951675 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ unpause │ -p embed-certs-951675 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ delete  │ -p embed-certs-951675                                                                                                                                                                                                                                      │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ delete  │ -p embed-certs-951675                                                                                                                                                                                                                                      │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ delete  │ -p disable-driver-mounts-823668                                                                                                                                                                                                                            │ disable-driver-mounts-823668 │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ start   │ -p default-k8s-diff-port-191845 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:40 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-191845 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:40 UTC │
	│ stop    │ -p default-k8s-diff-port-191845 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:40 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-191845 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:40 UTC │
	│ start   │ -p default-k8s-diff-port-191845 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:41 UTC │
	│ image   │ default-k8s-diff-port-191845 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ pause   │ -p default-k8s-diff-port-191845 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ unpause │ -p default-k8s-diff-port-191845 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ delete  │ -p default-k8s-diff-port-191845                                                                                                                                                                                                                            │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ delete  │ -p default-k8s-diff-port-191845                                                                                                                                                                                                                            │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ start   │ -p newest-cni-796924 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-796924            │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-333352 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-333352            │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 11:41:40
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 11:41:40.611522  589123 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:41:40.611651  589123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:41:40.611663  589123 out.go:374] Setting ErrFile to fd 2...
	I1213 11:41:40.611668  589123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:41:40.611912  589123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 11:41:40.612341  589123 out.go:368] Setting JSON to false
	I1213 11:41:40.613214  589123 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":15853,"bootTime":1765610247,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 11:41:40.613281  589123 start.go:143] virtualization:  
	I1213 11:41:40.617550  589123 out.go:179] * [newest-cni-796924] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:41:40.621011  589123 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:41:40.621109  589123 notify.go:221] Checking for updates...
	I1213 11:41:40.627428  589123 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:41:40.630647  589123 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 11:41:40.633653  589123 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 11:41:40.636743  589123 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:41:40.639875  589123 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:41:40.643470  589123 config.go:182] Loaded profile config "no-preload-333352": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:41:40.643591  589123 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:41:40.680100  589123 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:41:40.680226  589123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:41:40.756616  589123 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:41:40.747142182 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:41:40.756723  589123 docker.go:319] overlay module found
	I1213 11:41:40.759961  589123 out.go:179] * Using the docker driver based on user configuration
	I1213 11:41:40.762776  589123 start.go:309] selected driver: docker
	I1213 11:41:40.762800  589123 start.go:927] validating driver "docker" against <nil>
	I1213 11:41:40.762814  589123 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:41:40.763539  589123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:41:40.821660  589123 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:41:40.812604764 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:41:40.821819  589123 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1213 11:41:40.821853  589123 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1213 11:41:40.822076  589123 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 11:41:40.824951  589123 out.go:179] * Using Docker driver with root privileges
	I1213 11:41:40.827804  589123 cni.go:84] Creating CNI manager for ""
	I1213 11:41:40.827876  589123 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 11:41:40.827892  589123 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 11:41:40.827980  589123 start.go:353] cluster config:
	{Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:41:40.831095  589123 out.go:179] * Starting "newest-cni-796924" primary control-plane node in "newest-cni-796924" cluster
	I1213 11:41:40.833926  589123 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 11:41:40.836836  589123 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:41:40.839602  589123 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 11:41:40.839653  589123 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 11:41:40.839668  589123 cache.go:65] Caching tarball of preloaded images
	I1213 11:41:40.839677  589123 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:41:40.839751  589123 preload.go:238] Found /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 11:41:40.839761  589123 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 11:41:40.839868  589123 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/config.json ...
	I1213 11:41:40.839885  589123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/config.json: {Name:mk0ce282ac2d53ca7f0abb05f9aee384330b83fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:40.859227  589123 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:41:40.859251  589123 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:41:40.859271  589123 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:41:40.859304  589123 start.go:360] acquireMachinesLock for newest-cni-796924: {Name:mkb23dc851632c47983afd0f3cb215d071a4c6d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:41:40.859430  589123 start.go:364] duration metric: took 105.773µs to acquireMachinesLock for "newest-cni-796924"
	I1213 11:41:40.859462  589123 start.go:93] Provisioning new machine with config: &{Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 11:41:40.859540  589123 start.go:125] createHost starting for "" (driver="docker")
	I1213 11:41:40.862994  589123 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 11:41:40.863248  589123 start.go:159] libmachine.API.Create for "newest-cni-796924" (driver="docker")
	I1213 11:41:40.863284  589123 client.go:173] LocalClient.Create starting
	I1213 11:41:40.863374  589123 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem
	I1213 11:41:40.863413  589123 main.go:143] libmachine: Decoding PEM data...
	I1213 11:41:40.863433  589123 main.go:143] libmachine: Parsing certificate...
	I1213 11:41:40.863487  589123 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem
	I1213 11:41:40.863508  589123 main.go:143] libmachine: Decoding PEM data...
	I1213 11:41:40.863527  589123 main.go:143] libmachine: Parsing certificate...
	I1213 11:41:40.863921  589123 cli_runner.go:164] Run: docker network inspect newest-cni-796924 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 11:41:40.879900  589123 cli_runner.go:211] docker network inspect newest-cni-796924 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 11:41:40.879981  589123 network_create.go:284] running [docker network inspect newest-cni-796924] to gather additional debugging logs...
	I1213 11:41:40.880002  589123 cli_runner.go:164] Run: docker network inspect newest-cni-796924
	W1213 11:41:40.894997  589123 cli_runner.go:211] docker network inspect newest-cni-796924 returned with exit code 1
	I1213 11:41:40.895050  589123 network_create.go:287] error running [docker network inspect newest-cni-796924]: docker network inspect newest-cni-796924: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-796924 not found
	I1213 11:41:40.895065  589123 network_create.go:289] output of [docker network inspect newest-cni-796924]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-796924 not found
	
	** /stderr **
	I1213 11:41:40.895186  589123 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:41:40.912767  589123 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-381e4ce3c9ab IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:2d:23:57:0e:cc} reservation:<nil>}
	I1213 11:41:40.913250  589123 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bd1082d121b0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:7a:42:ce:41:ea:ae} reservation:<nil>}
	I1213 11:41:40.913761  589123 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ebeb7162e340 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:cf:aa:41:ac:19} reservation:<nil>}
	I1213 11:41:40.914391  589123 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019db560}
	I1213 11:41:40.914425  589123 network_create.go:124] attempt to create docker network newest-cni-796924 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1213 11:41:40.914493  589123 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-796924 newest-cni-796924
	I1213 11:41:40.975597  589123 network_create.go:108] docker network newest-cni-796924 192.168.76.0/24 created
	I1213 11:41:40.975632  589123 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-796924" container
	I1213 11:41:40.975710  589123 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 11:41:40.992098  589123 cli_runner.go:164] Run: docker volume create newest-cni-796924 --label name.minikube.sigs.k8s.io=newest-cni-796924 --label created_by.minikube.sigs.k8s.io=true
	I1213 11:41:41.011676  589123 oci.go:103] Successfully created a docker volume newest-cni-796924
	I1213 11:41:41.011779  589123 cli_runner.go:164] Run: docker run --rm --name newest-cni-796924-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-796924 --entrypoint /usr/bin/test -v newest-cni-796924:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 11:41:41.562335  589123 oci.go:107] Successfully prepared a docker volume newest-cni-796924
	I1213 11:41:41.562406  589123 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 11:41:41.562420  589123 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 11:41:41.562520  589123 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-796924:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 11:41:45.483539  589123 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-796924:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.920978562s)
	I1213 11:41:45.483577  589123 kic.go:203] duration metric: took 3.921153184s to extract preloaded images to volume ...
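Annotation: the two cli_runner commands above populate the freshly created newest-cni-796924 volume by mounting it alongside the host-side preload tarball in a throwaway container and untarring with lz4 decompression. A hedged way to confirm what landed in the volume afterwards (read-only mount; any image with /bin/ls would do, the kicbase digest is reused here only because it is already pulled):

    docker run --rm -v newest-cni-796924:/extractDir:ro \
      --entrypoint /bin/ls \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f \
      -la /extractDir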
	W1213 11:41:45.483725  589123 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 11:41:45.483849  589123 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 11:41:45.544786  589123 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-796924 --name newest-cni-796924 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-796924 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-796924 --network newest-cni-796924 --ip 192.168.76.2 --volume newest-cni-796924:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 11:41:45.837300  589123 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Running}}
	I1213 11:41:45.859478  589123 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:41:45.883282  589123 cli_runner.go:164] Run: docker exec newest-cni-796924 stat /var/lib/dpkg/alternatives/iptables
	I1213 11:41:45.939942  589123 oci.go:144] the created container "newest-cni-796924" has a running status.
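Annotation: the docker run above publishes the node's service ports (22, 2376, 5000, 8443, 32443) to ephemeral host ports bound on 127.0.0.1, which is why the SSH dials later in this log target 127.0.0.1:33430 rather than port 22. The mapping can be recovered at any time with:

    # Prints the host side of the 22/tcp mapping, e.g. 127.0.0.1:33430
    docker port newest-cni-796924 22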
	I1213 11:41:45.939979  589123 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa...
	I1213 11:41:46.475943  589123 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 11:41:46.497112  589123 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:41:46.514893  589123 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 11:41:46.514914  589123 kic_runner.go:114] Args: [docker exec --privileged newest-cni-796924 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 11:41:46.555872  589123 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:41:46.582882  589123 machine.go:94] provisionDockerMachine start ...
	I1213 11:41:46.583000  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:46.601258  589123 main.go:143] libmachine: Using SSH client type: native
	I1213 11:41:46.601613  589123 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33430 <nil> <nil>}
	I1213 11:41:46.601628  589123 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:41:46.602167  589123 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43966->127.0.0.1:33430: read: connection reset by peer
	I1213 11:41:49.762525  589123 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-796924
	
	I1213 11:41:49.762610  589123 ubuntu.go:182] provisioning hostname "newest-cni-796924"
	I1213 11:41:49.762751  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:49.782841  589123 main.go:143] libmachine: Using SSH client type: native
	I1213 11:41:49.783298  589123 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33430 <nil> <nil>}
	I1213 11:41:49.783328  589123 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-796924 && echo "newest-cni-796924" | sudo tee /etc/hostname
	I1213 11:41:49.948352  589123 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-796924
	
	I1213 11:41:49.948435  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:49.965977  589123 main.go:143] libmachine: Using SSH client type: native
	I1213 11:41:49.966316  589123 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33430 <nil> <nil>}
	I1213 11:41:49.966341  589123 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-796924' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-796924/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-796924' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:41:50.147128  589123 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:41:50.147171  589123 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-307042/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-307042/.minikube}
	I1213 11:41:50.147219  589123 ubuntu.go:190] setting up certificates
	I1213 11:41:50.147230  589123 provision.go:84] configureAuth start
	I1213 11:41:50.147297  589123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-796924
	I1213 11:41:50.165702  589123 provision.go:143] copyHostCerts
	I1213 11:41:50.165784  589123 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem, removing ...
	I1213 11:41:50.165802  589123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem
	I1213 11:41:50.165914  589123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem (1082 bytes)
	I1213 11:41:50.166068  589123 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem, removing ...
	I1213 11:41:50.166080  589123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem
	I1213 11:41:50.166123  589123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem (1123 bytes)
	I1213 11:41:50.166210  589123 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem, removing ...
	I1213 11:41:50.166226  589123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem
	I1213 11:41:50.166257  589123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem (1675 bytes)
	I1213 11:41:50.166335  589123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem org=jenkins.newest-cni-796924 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-796924]
	I1213 11:41:50.575993  589123 provision.go:177] copyRemoteCerts
	I1213 11:41:50.576089  589123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:41:50.576156  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:50.593521  589123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:41:50.702596  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 11:41:50.720289  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:41:50.738001  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 11:41:50.755303  589123 provision.go:87] duration metric: took 608.049982ms to configureAuth
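Annotation: configureAuth above regenerates the docker-machine style server certificate with SANs for every name the node may be reached by (loopback, the static container IP, and the hostnames listed in the san=[...] line). A quick spot-check of the SAN list on the generated cert, assuming openssl is available on the host:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'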
	I1213 11:41:50.755333  589123 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:41:50.755533  589123 config.go:182] Loaded profile config "newest-cni-796924": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:41:50.755547  589123 machine.go:97] duration metric: took 4.172642608s to provisionDockerMachine
	I1213 11:41:50.755555  589123 client.go:176] duration metric: took 9.892260099s to LocalClient.Create
	I1213 11:41:50.755575  589123 start.go:167] duration metric: took 9.892327365s to libmachine.API.Create "newest-cni-796924"
	I1213 11:41:50.755586  589123 start.go:293] postStartSetup for "newest-cni-796924" (driver="docker")
	I1213 11:41:50.755596  589123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:41:50.755647  589123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:41:50.755689  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:50.772594  589123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:41:50.878962  589123 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:41:50.882465  589123 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:41:50.882496  589123 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:41:50.882513  589123 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/addons for local assets ...
	I1213 11:41:50.882569  589123 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/files for local assets ...
	I1213 11:41:50.882649  589123 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> 3089152.pem in /etc/ssl/certs
	I1213 11:41:50.882784  589123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:41:50.890136  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 11:41:50.909142  589123 start.go:296] duration metric: took 153.541145ms for postStartSetup
	I1213 11:41:50.909520  589123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-796924
	I1213 11:41:50.926272  589123 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/config.json ...
	I1213 11:41:50.926557  589123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:41:50.926615  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:50.943196  589123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:41:51.043825  589123 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:41:51.048871  589123 start.go:128] duration metric: took 10.189316484s to createHost
	I1213 11:41:51.048902  589123 start.go:83] releasing machines lock for "newest-cni-796924", held for 10.189458492s
	I1213 11:41:51.048990  589123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-796924
	I1213 11:41:51.066001  589123 ssh_runner.go:195] Run: cat /version.json
	I1213 11:41:51.066070  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:51.066359  589123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:41:51.066428  589123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:41:51.089473  589123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:41:51.096259  589123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:41:51.199327  589123 ssh_runner.go:195] Run: systemctl --version
	I1213 11:41:51.296961  589123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:41:51.301350  589123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:41:51.301424  589123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:41:51.333583  589123 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 11:41:51.333658  589123 start.go:496] detecting cgroup driver to use...
	I1213 11:41:51.333709  589123 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 11:41:51.333790  589123 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 11:41:51.348970  589123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:41:51.362099  589123 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:41:51.362222  589123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:41:51.379694  589123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:41:51.398786  589123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:41:51.510657  589123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:41:51.629156  589123 docker.go:234] disabling docker service ...
	I1213 11:41:51.629223  589123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:41:51.650731  589123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:41:51.664169  589123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:41:51.793148  589123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:41:51.904796  589123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:41:51.919458  589123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:41:51.942455  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 11:41:51.956281  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 11:41:51.965941  589123 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 11:41:51.966013  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 11:41:51.977493  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:41:51.987404  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 11:41:52.000948  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:41:52.013279  589123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:41:52.023039  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 11:41:52.032853  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 11:41:52.042519  589123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 11:41:52.052346  589123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:41:52.060125  589123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:41:52.068281  589123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:41:52.179247  589123 ssh_runner.go:195] Run: sudo systemctl restart containerd
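Annotation: the run of sed edits above rewrites /etc/containerd/config.toml in place: the sandbox image is pinned to registry.k8s.io/pause:3.10.1, SystemdCgroup is forced to false to match the detected "cgroupfs" host driver, legacy runc v1 runtime names are mapped to io.containerd.runc.v2, and unprivileged ports are enabled, before systemd is reloaded and containerd restarted. A hedged spot-check of the result, run on the node (plain greps, no minikube involvement):

    grep -nE 'sandbox_image|SystemdCgroup|enable_unprivileged_ports' /etc/containerd/config.toml
    systemctl is-active containerd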
	I1213 11:41:52.320321  589123 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 11:41:52.320429  589123 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 11:41:52.324439  589123 start.go:564] Will wait 60s for crictl version
	I1213 11:41:52.324501  589123 ssh_runner.go:195] Run: which crictl
	I1213 11:41:52.328708  589123 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:41:52.357589  589123 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 11:41:52.357683  589123 ssh_runner.go:195] Run: containerd --version
	I1213 11:41:52.383274  589123 ssh_runner.go:195] Run: containerd --version
	I1213 11:41:52.413360  589123 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 11:41:52.416432  589123 cli_runner.go:164] Run: docker network inspect newest-cni-796924 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:41:52.432557  589123 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 11:41:52.436286  589123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:41:52.449106  589123 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 11:41:52.452071  589123 kubeadm.go:884] updating cluster {Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:41:52.452217  589123 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 11:41:52.452308  589123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:41:52.477318  589123 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 11:41:52.477343  589123 containerd.go:534] Images already preloaded, skipping extraction
	I1213 11:41:52.477404  589123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:41:52.505926  589123 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 11:41:52.505953  589123 cache_images.go:86] Images are preloaded, skipping loading
	I1213 11:41:52.505961  589123 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 11:41:52.506065  589123 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-796924 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:41:52.506135  589123 ssh_runner.go:195] Run: sudo crictl info
	I1213 11:41:52.531708  589123 cni.go:84] Creating CNI manager for ""
	I1213 11:41:52.531733  589123 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 11:41:52.531753  589123 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 11:41:52.531776  589123 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-796924 NodeName:newest-cni-796924 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:41:52.531907  589123 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-796924"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 11:41:52.531983  589123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 11:41:52.540473  589123 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 11:41:52.540571  589123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:41:52.548635  589123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 11:41:52.562445  589123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 11:41:52.579341  589123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
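Annotation: at this point the rendered kubeadm config shown above has been written to /var/tmp/minikube/kubeadm.yaml.new on the node. Given the init failure later in this log, a useful manual step is to validate that file against the same kubeadm binary before init runs; this assumes the v1.35.0-beta.0 binary still ships the `config validate` subcommand introduced in v1.26:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new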
	I1213 11:41:52.593144  589123 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:41:52.596805  589123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:41:52.607006  589123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:41:52.727771  589123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:41:52.751356  589123 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924 for IP: 192.168.76.2
	I1213 11:41:52.751381  589123 certs.go:195] generating shared ca certs ...
	I1213 11:41:52.751399  589123 certs.go:227] acquiring lock for ca certs: {Name:mkca84a85b1b664dba08ef567e84d493256f825e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:52.751547  589123 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key
	I1213 11:41:52.751597  589123 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key
	I1213 11:41:52.751607  589123 certs.go:257] generating profile certs ...
	I1213 11:41:52.751662  589123 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/client.key
	I1213 11:41:52.751679  589123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/client.crt with IP's: []
	I1213 11:41:53.086363  589123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/client.crt ...
	I1213 11:41:53.086398  589123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/client.crt: {Name:mk66b963bdd54f4b935fe2fc7acd97dde553339b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:53.086603  589123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/client.key ...
	I1213 11:41:53.086620  589123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/client.key: {Name:mk98638456845d9072484c2ea9cf4343d6af1634 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:53.086739  589123 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key.ced45374
	I1213 11:41:53.086760  589123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.crt.ced45374 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1213 11:41:53.240504  589123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.crt.ced45374 ...
	I1213 11:41:53.240537  589123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.crt.ced45374: {Name:mkabd19bc7e960d2c555d82ddd752e663c8f6cd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:53.240708  589123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key.ced45374 ...
	I1213 11:41:53.240722  589123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key.ced45374: {Name:mk3e4fdd1c06bfd329cc4a39da890d8da6317b83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:53.240816  589123 certs.go:382] copying /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.crt.ced45374 -> /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.crt
	I1213 11:41:53.240898  589123 certs.go:386] copying /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key.ced45374 -> /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key
	I1213 11:41:53.240954  589123 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.key
	I1213 11:41:53.240973  589123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.crt with IP's: []
	I1213 11:41:53.471880  589123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.crt ...
	I1213 11:41:53.471916  589123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.crt: {Name:mkc7d686a714b0dc00954cf052cbfbc601a1b715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:53.472127  589123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.key ...
	I1213 11:41:53.472146  589123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.key: {Name:mk3b8f645a4e8504ec9bd2eed45071861029af54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:41:53.472358  589123 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem (1338 bytes)
	W1213 11:41:53.472406  589123 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915_empty.pem, impossibly tiny 0 bytes
	I1213 11:41:53.472425  589123 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:41:53.472456  589123 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:41:53.472484  589123 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:41:53.472514  589123 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem (1675 bytes)
	I1213 11:41:53.472565  589123 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 11:41:53.473197  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:41:53.497619  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:41:53.520852  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:41:53.540142  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:41:53.558660  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 11:41:53.576992  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 11:41:53.595267  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:41:53.613794  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 11:41:53.632148  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /usr/share/ca-certificates/3089152.pem (1708 bytes)
	I1213 11:41:53.650879  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:41:53.676104  589123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem --> /usr/share/ca-certificates/308915.pem (1338 bytes)
	I1213 11:41:53.697746  589123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:41:53.714070  589123 ssh_runner.go:195] Run: openssl version
	I1213 11:41:53.722093  589123 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3089152.pem
	I1213 11:41:53.730578  589123 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3089152.pem /etc/ssl/certs/3089152.pem
	I1213 11:41:53.738421  589123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3089152.pem
	I1213 11:41:53.742437  589123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:22 /usr/share/ca-certificates/3089152.pem
	I1213 11:41:53.742533  589123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3089152.pem
	I1213 11:41:53.786290  589123 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:41:53.794147  589123 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3089152.pem /etc/ssl/certs/3ec20f2e.0
	I1213 11:41:53.802168  589123 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:41:53.809707  589123 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:41:53.817606  589123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:41:53.821659  589123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:41:53.821728  589123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:41:53.863050  589123 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:41:53.870933  589123 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 11:41:53.878551  589123 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/308915.pem
	I1213 11:41:53.886061  589123 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/308915.pem /etc/ssl/certs/308915.pem
	I1213 11:41:53.893998  589123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/308915.pem
	I1213 11:41:53.898172  589123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:22 /usr/share/ca-certificates/308915.pem
	I1213 11:41:53.898239  589123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308915.pem
	I1213 11:41:53.939253  589123 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 11:41:53.946895  589123 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/308915.pem /etc/ssl/certs/51391683.0
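Annotation: the test/ln/openssl sequence above follows the standard OpenSSL CA-directory layout: each CA is installed under /usr/share/ca-certificates and symlinked into /etc/ssl/certs under its subject hash plus a collision index, which is where names like b5213941.0 and 51391683.0 come from. The same link can be derived by hand, mirroring the `openssl x509 -hash` calls in the log:

    h="$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)"
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"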
	I1213 11:41:53.954761  589123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:41:53.958396  589123 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 11:41:53.958494  589123 kubeadm.go:401] StartCluster: {Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:41:53.958606  589123 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 11:41:53.958783  589123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:41:53.989529  589123 cri.go:89] found id: ""
	I1213 11:41:53.989604  589123 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:41:53.998028  589123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 11:41:54.008661  589123 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:41:54.008741  589123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:41:54.018340  589123 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:41:54.018369  589123 kubeadm.go:158] found existing configuration files:
	
	I1213 11:41:54.018431  589123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:41:54.027307  589123 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:41:54.027393  589123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:41:54.036120  589123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:41:54.044655  589123 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:41:54.044733  589123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:41:54.053302  589123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:41:54.061899  589123 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:41:54.061991  589123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:41:54.070981  589123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:41:54.079553  589123 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:41:54.079622  589123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 11:41:54.087398  589123 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:41:54.133199  589123 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:41:54.133519  589123 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:41:54.230705  589123 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:41:54.230782  589123 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:41:54.230824  589123 kubeadm.go:319] OS: Linux
	I1213 11:41:54.230875  589123 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:41:54.230929  589123 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:41:54.230979  589123 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:41:54.231032  589123 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:41:54.231083  589123 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:41:54.231135  589123 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:41:54.231184  589123 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:41:54.231236  589123 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:41:54.231285  589123 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:41:54.298600  589123 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:41:54.298731  589123 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:41:54.298837  589123 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:41:54.307104  589123 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:41:54.313608  589123 out.go:252]   - Generating certificates and keys ...
	I1213 11:41:54.313778  589123 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:41:54.313890  589123 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:41:54.510481  589123 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 11:41:54.575310  589123 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 11:41:54.686709  589123 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 11:41:54.914237  589123 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 11:41:55.329374  589123 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 11:41:55.329538  589123 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-796924] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 11:41:55.443297  589123 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 11:41:55.443660  589123 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-796924] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 11:41:55.929252  589123 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 11:41:56.099892  589123 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 11:41:56.662486  589123 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 11:41:56.662923  589123 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:41:56.728098  589123 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:41:56.987601  589123 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:41:57.419088  589123 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:41:57.640413  589123 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:41:58.149864  589123 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:41:58.150638  589123 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:41:58.153451  589123 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:41:58.157369  589123 out.go:252]   - Booting up control plane ...
	I1213 11:41:58.157498  589123 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:41:58.157584  589123 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:41:58.157650  589123 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:41:58.194714  589123 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:41:58.194860  589123 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:41:58.202073  589123 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:41:58.202506  589123 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:41:58.202564  589123 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:41:58.355362  589123 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:41:58.355487  589123 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 11:45:15.897209  568526 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000248006s
	I1213 11:45:15.897238  568526 kubeadm.go:319] 
	I1213 11:45:15.897296  568526 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 11:45:15.897329  568526 kubeadm.go:319] 	- The kubelet is not running
	I1213 11:45:15.897444  568526 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 11:45:15.897460  568526 kubeadm.go:319] 
	I1213 11:45:15.897565  568526 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 11:45:15.897602  568526 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 11:45:15.897639  568526 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 11:45:15.897647  568526 kubeadm.go:319] 
	I1213 11:45:15.901779  568526 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 11:45:15.902208  568526 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 11:45:15.902322  568526 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 11:45:15.902560  568526 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 11:45:15.902570  568526 kubeadm.go:319] 
	I1213 11:45:15.902639  568526 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
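Annotation: kubeadm gave up after four minutes because the kubelet's local health endpoint on port 10248 never answered, so no control-plane static pods were ever started (consistent with the empty crictl listings that follow). The triage commands kubeadm itself suggests, run here from the host against the affected node container (newest-cni-796924 is used as the example name from this start sequence; curl is known to exist in the node image from earlier in this log):

    docker exec newest-cni-796924 systemctl status kubelet --no-pager
    docker exec newest-cni-796924 journalctl -xeu kubelet --no-pager | tail -n 50
    docker exec newest-cni-796924 curl -sS http://127.0.0.1:10248/healthz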
	I1213 11:45:15.902721  568526 kubeadm.go:403] duration metric: took 8m6.858200115s to StartCluster
	I1213 11:45:15.902773  568526 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:45:15.902832  568526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:45:15.928207  568526 cri.go:89] found id: ""
	I1213 11:45:15.928245  568526 logs.go:282] 0 containers: []
	W1213 11:45:15.928254  568526 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:45:15.928262  568526 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:45:15.928320  568526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:45:15.954471  568526 cri.go:89] found id: ""
	I1213 11:45:15.954508  568526 logs.go:282] 0 containers: []
	W1213 11:45:15.954520  568526 logs.go:284] No container was found matching "etcd"
	I1213 11:45:15.954531  568526 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:45:15.954610  568526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:45:15.979197  568526 cri.go:89] found id: ""
	I1213 11:45:15.979226  568526 logs.go:282] 0 containers: []
	W1213 11:45:15.979236  568526 logs.go:284] No container was found matching "coredns"
	I1213 11:45:15.979243  568526 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:45:15.979315  568526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:45:16.008075  568526 cri.go:89] found id: ""
	I1213 11:45:16.008098  568526 logs.go:282] 0 containers: []
	W1213 11:45:16.008107  568526 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:45:16.008118  568526 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:45:16.008193  568526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:45:16.034169  568526 cri.go:89] found id: ""
	I1213 11:45:16.034191  568526 logs.go:282] 0 containers: []
	W1213 11:45:16.034199  568526 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:45:16.034207  568526 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:45:16.034265  568526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:45:16.058826  568526 cri.go:89] found id: ""
	I1213 11:45:16.058854  568526 logs.go:282] 0 containers: []
	W1213 11:45:16.058862  568526 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:45:16.058869  568526 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:45:16.058928  568526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:45:16.083123  568526 cri.go:89] found id: ""
	I1213 11:45:16.083151  568526 logs.go:282] 0 containers: []
	W1213 11:45:16.083160  568526 logs.go:284] No container was found matching "kindnet"
	I1213 11:45:16.083171  568526 logs.go:123] Gathering logs for dmesg ...
	I1213 11:45:16.083184  568526 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:45:16.100676  568526 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:45:16.100707  568526 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:45:16.166022  568526 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:45:16.157896    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.158620    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.160237    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.160701    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.162431    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:45:16.157896    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.158620    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.160237    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.160701    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:45:16.162431    5421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:45:16.166047  568526 logs.go:123] Gathering logs for containerd ...
	I1213 11:45:16.166060  568526 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:45:16.207842  568526 logs.go:123] Gathering logs for container status ...
	I1213 11:45:16.207880  568526 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:45:16.235350  568526 logs.go:123] Gathering logs for kubelet ...
	I1213 11:45:16.235376  568526 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 11:45:16.294386  568526 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000248006s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 11:45:16.294456  568526 out.go:285] * 
	W1213 11:45:16.294516  568526 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000248006s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 11:45:16.294544  568526 out.go:285] * 
	W1213 11:45:16.296685  568526 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 11:45:16.302469  568526 out.go:203] 
	W1213 11:45:16.306292  568526 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000248006s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 11:45:16.306365  568526 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 11:45:16.306395  568526 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 11:45:16.309964  568526 out.go:203] 
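All three copies of the kubeadm error above carry the same actionable hint: on kubelet v1.35 or newer, cgroup v1 hosts are rejected unless the 'FailCgroupV1' kubelet configuration option is explicitly set to 'false' (per the SystemVerification warning), and the kubelet journal later in this report confirms the kubelet exits on exactly that validation. A minimal sketch of the opt-out, assuming the KubeletConfiguration field is spelled failCgroupV1 and is not already present in the file kubeadm writes:

	# Sketch only: append the opt-out named by the SystemVerification warning.
	# Assumptions: failCgroupV1 is the KubeletConfiguration key, and it is absent
	# from /var/lib/kubelet/config.yaml (the path written during kubelet-start above).
	sudo tee -a /var/lib/kubelet/config.yaml <<'EOF'
	failCgroupV1: false
	EOF
	sudo systemctl restart kubelet

The suggestion minikube prints here (--extra-config=kubelet.cgroup-driver=systemd) targets a cgroup-driver mismatch instead, which is a different failure mode from the cgroup v1 validation above.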
	I1213 11:45:58.353148  589123 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000052585s
	I1213 11:45:58.353188  589123 kubeadm.go:319] 
	I1213 11:45:58.353442  589123 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 11:45:58.353506  589123 kubeadm.go:319] 	- The kubelet is not running
	I1213 11:45:58.353695  589123 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 11:45:58.353705  589123 kubeadm.go:319] 
	I1213 11:45:58.354139  589123 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 11:45:58.354199  589123 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 11:45:58.354254  589123 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 11:45:58.354259  589123 kubeadm.go:319] 
	I1213 11:45:58.358851  589123 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 11:45:58.359414  589123 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 11:45:58.359545  589123 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 11:45:58.359830  589123 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 11:45:58.359839  589123 kubeadm.go:319] 
	I1213 11:45:58.359915  589123 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1213 11:45:58.360054  589123 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-796924] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-796924] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000052585s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1213 11:45:58.360154  589123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 11:45:58.767471  589123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:45:58.780638  589123 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:45:58.780702  589123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:45:58.788623  589123 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:45:58.788646  589123 kubeadm.go:158] found existing configuration files:
	
	I1213 11:45:58.788724  589123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:45:58.796630  589123 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:45:58.796706  589123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:45:58.804119  589123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:45:58.811956  589123 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:45:58.812020  589123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:45:58.819661  589123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:45:58.827110  589123 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:45:58.827171  589123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:45:58.834525  589123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:45:58.842305  589123 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:45:58.842374  589123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 11:45:58.849891  589123 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:45:58.890505  589123 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:45:58.890564  589123 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:45:58.955820  589123 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:45:58.955899  589123 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:45:58.955941  589123 kubeadm.go:319] OS: Linux
	I1213 11:45:58.955989  589123 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:45:58.956040  589123 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:45:58.956091  589123 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:45:58.956143  589123 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:45:58.956193  589123 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:45:58.956250  589123 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:45:58.956298  589123 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:45:58.956350  589123 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:45:58.956399  589123 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:45:59.029638  589123 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:45:59.029754  589123 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:45:59.029851  589123 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:45:59.039109  589123 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:45:59.042552  589123 out.go:252]   - Generating certificates and keys ...
	I1213 11:45:59.042723  589123 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:45:59.042824  589123 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:45:59.042943  589123 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 11:45:59.043039  589123 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 11:45:59.043207  589123 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 11:45:59.043289  589123 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 11:45:59.043376  589123 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 11:45:59.043461  589123 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 11:45:59.043567  589123 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 11:45:59.043667  589123 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 11:45:59.043734  589123 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 11:45:59.043819  589123 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:45:59.264981  589123 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:45:59.845721  589123 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:46:00.029919  589123 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:46:00.271744  589123 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:46:00.538849  589123 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:46:00.539679  589123 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:46:00.542509  589123 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:46:00.546068  589123 out.go:252]   - Booting up control plane ...
	I1213 11:46:00.546182  589123 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:46:00.546263  589123 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:46:00.546330  589123 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:46:00.568499  589123 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:46:00.568665  589123 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:46:00.575924  589123 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:46:00.576291  589123 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:46:00.576363  589123 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:46:00.707953  589123 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:46:00.708079  589123 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 11:36:56 no-preload-333352 containerd[759]: time="2025-12-13T11:36:56.089357963Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:36:57 no-preload-333352 containerd[759]: time="2025-12-13T11:36:57.511622237Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 13 11:36:57 no-preload-333352 containerd[759]: time="2025-12-13T11:36:57.516065454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 13 11:36:57 no-preload-333352 containerd[759]: time="2025-12-13T11:36:57.531326305Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:36:57 no-preload-333352 containerd[759]: time="2025-12-13T11:36:57.531822769Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:36:58 no-preload-333352 containerd[759]: time="2025-12-13T11:36:58.968197722Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 13 11:36:58 no-preload-333352 containerd[759]: time="2025-12-13T11:36:58.971116854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 13 11:36:58 no-preload-333352 containerd[759]: time="2025-12-13T11:36:58.980545274Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:36:58 no-preload-333352 containerd[759]: time="2025-12-13T11:36:58.981362816Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:37:02 no-preload-333352 containerd[759]: time="2025-12-13T11:37:02.214365383Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 13 11:37:02 no-preload-333352 containerd[759]: time="2025-12-13T11:37:02.217628084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 13 11:37:02 no-preload-333352 containerd[759]: time="2025-12-13T11:37:02.241331613Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:37:02 no-preload-333352 containerd[759]: time="2025-12-13T11:37:02.242087346Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:37:03 no-preload-333352 containerd[759]: time="2025-12-13T11:37:03.217262130Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 13 11:37:03 no-preload-333352 containerd[759]: time="2025-12-13T11:37:03.220012055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 13 11:37:03 no-preload-333352 containerd[759]: time="2025-12-13T11:37:03.228672623Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:37:03 no-preload-333352 containerd[759]: time="2025-12-13T11:37:03.229475338Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:37:04 no-preload-333352 containerd[759]: time="2025-12-13T11:37:04.630418294Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 13 11:37:04 no-preload-333352 containerd[759]: time="2025-12-13T11:37:04.633143086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 13 11:37:04 no-preload-333352 containerd[759]: time="2025-12-13T11:37:04.641567747Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:37:04 no-preload-333352 containerd[759]: time="2025-12-13T11:37:04.642255121Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:37:04 no-preload-333352 containerd[759]: time="2025-12-13T11:37:04.996924296Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 13 11:37:05 no-preload-333352 containerd[759]: time="2025-12-13T11:37:05.004833973Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 13 11:37:05 no-preload-333352 containerd[759]: time="2025-12-13T11:37:05.013913352Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 11:37:05 no-preload-333352 containerd[759]: time="2025-12-13T11:37:05.014372006Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:46:45.866084    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:46:45.866679    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:46:45.868303    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:46:45.868889    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:46:45.870250    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +12.907693] overlayfs: idmapped layers are currently not supported
	[Dec13 09:25] overlayfs: idmapped layers are currently not supported
	[ +26.192425] overlayfs: idmapped layers are currently not supported
	[Dec13 09:26] overlayfs: idmapped layers are currently not supported
	[ +25.729788] overlayfs: idmapped layers are currently not supported
	[Dec13 09:27] overlayfs: idmapped layers are currently not supported
	[Dec13 09:28] overlayfs: idmapped layers are currently not supported
	[Dec13 09:31] overlayfs: idmapped layers are currently not supported
	[Dec13 09:32] overlayfs: idmapped layers are currently not supported
	[ +17.979093] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec13 09:43] overlayfs: idmapped layers are currently not supported
	[Dec13 09:45] overlayfs: idmapped layers are currently not supported
	[ +25.885102] overlayfs: idmapped layers are currently not supported
	[Dec13 09:46] overlayfs: idmapped layers are currently not supported
	[ +22.078149] overlayfs: idmapped layers are currently not supported
	[Dec13 09:47] overlayfs: idmapped layers are currently not supported
	[Dec13 09:48] overlayfs: idmapped layers are currently not supported
	[Dec13 09:49] overlayfs: idmapped layers are currently not supported
	[Dec13 09:51] overlayfs: idmapped layers are currently not supported
	[ +17.043564] overlayfs: idmapped layers are currently not supported
	[Dec13 09:52] overlayfs: idmapped layers are currently not supported
	[Dec13 09:53] overlayfs: idmapped layers are currently not supported
	[Dec13 10:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:19] hrtimer: interrupt took 21247146 ns
	
	
	==> kernel <==
	 11:46:45 up  4:29,  0 user,  load average: 0.53, 1.10, 1.73
	Linux no-preload-333352 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 11:46:42 no-preload-333352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:46:43 no-preload-333352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 436.
	Dec 13 11:46:43 no-preload-333352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:46:43 no-preload-333352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:46:43 no-preload-333352 kubelet[6583]: E1213 11:46:43.468412    6583 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:46:43 no-preload-333352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:46:43 no-preload-333352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:46:44 no-preload-333352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 437.
	Dec 13 11:46:44 no-preload-333352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:46:44 no-preload-333352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:46:44 no-preload-333352 kubelet[6589]: E1213 11:46:44.229765    6589 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:46:44 no-preload-333352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:46:44 no-preload-333352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:46:44 no-preload-333352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 438.
	Dec 13 11:46:44 no-preload-333352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:46:44 no-preload-333352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:46:44 no-preload-333352 kubelet[6607]: E1213 11:46:44.986149    6607 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:46:44 no-preload-333352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:46:44 no-preload-333352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:46:45 no-preload-333352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 439.
	Dec 13 11:46:45 no-preload-333352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:46:45 no-preload-333352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:46:45 no-preload-333352 kubelet[6673]: E1213 11:46:45.780533    6673 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:46:45 no-preload-333352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:46:45 no-preload-333352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
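The kubelet journal in the dump above pins down why every probe of 127.0.0.1:10248 failed: systemd restarts kubelet continuously (restart counters 436 through 439 in this short window alone), and each attempt exits immediately with "kubelet is configured to not run on a host using cgroup v1". When reproducing this elsewhere, the host's cgroup mode can be confirmed with standard coreutils; a hedged diagnostic:

	# cgroup2fs => unified cgroup v2; tmpfs => the legacy cgroup v1 hierarchy
	stat -fc %T /sys/fs/cgroup/

On this Ubuntu 20.04 host the kubelet's own error message already settles the answer; the check is only useful when triaging a different machine.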
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-333352 -n no-preload-333352
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-333352 -n no-preload-333352: exit status 6 (331.260989ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 11:46:46.312368  596706 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-333352" does not appear in /home/jenkins/minikube-integration/22127-307042/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-333352" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (85.31s)
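Besides the kubelet failure, the status probe above reports a stale kubectl context: the "no-preload-333352" entry is missing from the kubeconfig, and minikube's own WARNING names the fix. A sketch using the binary path and profile name from this run:

	# Regenerate the kubeconfig entry for this profile, per the WARNING above.
	out/minikube-linux-arm64 update-context -p no-preload-333352

This repairs only the kubeconfig entry; the apiserver stays "Stopped" until the underlying kubelet issue is resolved.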

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (369.99s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-333352 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1213 11:46:48.082309  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:47:56.262921  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/default-k8s-diff-port-191845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:49:47.365893  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:49:51.165162  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-333352 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 80 (6m8.316762817s)

                                                
                                                
-- stdout --
	* [no-preload-333352] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "no-preload-333352" primary control-plane node in "no-preload-333352" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 11:46:47.931970  596998 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:46:47.932200  596998 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:46:47.932224  596998 out.go:374] Setting ErrFile to fd 2...
	I1213 11:46:47.932243  596998 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:46:47.932512  596998 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 11:46:47.932893  596998 out.go:368] Setting JSON to false
	I1213 11:46:47.933847  596998 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":16161,"bootTime":1765610247,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 11:46:47.933941  596998 start.go:143] virtualization:  
	I1213 11:46:47.936853  596998 out.go:179] * [no-preload-333352] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:46:47.940791  596998 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:46:47.940972  596998 notify.go:221] Checking for updates...
	I1213 11:46:47.944715  596998 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:46:47.948724  596998 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 11:46:47.952032  596998 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 11:46:47.955654  596998 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:46:47.958860  596998 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:46:47.962467  596998 config.go:182] Loaded profile config "no-preload-333352": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:46:47.963200  596998 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:46:47.998152  596998 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:46:47.998290  596998 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:46:48.062434  596998 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:46:48.052493365 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:46:48.062546  596998 docker.go:319] overlay module found
	I1213 11:46:48.065709  596998 out.go:179] * Using the docker driver based on existing profile
	I1213 11:46:48.068574  596998 start.go:309] selected driver: docker
	I1213 11:46:48.068598  596998 start.go:927] validating driver "docker" against &{Name:no-preload-333352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-333352 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:46:48.068700  596998 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:46:48.069441  596998 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:46:48.125553  596998 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:46:48.115368398 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:46:48.125930  596998 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 11:46:48.125957  596998 cni.go:84] Creating CNI manager for ""
	I1213 11:46:48.126004  596998 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 11:46:48.126038  596998 start.go:353] cluster config:
	{Name:no-preload-333352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-333352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:46:48.129462  596998 out.go:179] * Starting "no-preload-333352" primary control-plane node in "no-preload-333352" cluster
	I1213 11:46:48.132280  596998 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 11:46:48.135249  596998 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:46:48.138117  596998 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 11:46:48.138171  596998 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:46:48.138307  596998 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/config.json ...
	I1213 11:46:48.138613  596998 cache.go:107] acquiring lock: {Name:mk31a59cdc41332147a99da115e762325d4c0338 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:46:48.138751  596998 cache.go:115] /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1213 11:46:48.138763  596998 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 161.618µs
	I1213 11:46:48.138777  596998 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1213 11:46:48.138765  596998 cache.go:107] acquiring lock: {Name:mk2ae32cc20ed4877d34af62f362936effddd88e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:46:48.138790  596998 cache.go:107] acquiring lock: {Name:mkc81502ef492ecd96689a43cd1ba75bb4269f1e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:46:48.138812  596998 cache.go:107] acquiring lock: {Name:mk8c5f5248a840d1f1002cf2ef82275f7d10aa22 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:46:48.138842  596998 cache.go:115] /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1213 11:46:48.138848  596998 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 60.267µs
	I1213 11:46:48.138854  596998 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1213 11:46:48.138851  596998 cache.go:107] acquiring lock: {Name:mk35ccdf3fe56b66e694c71ff2d919f143d8dacc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:46:48.138862  596998 cache.go:107] acquiring lock: {Name:mk23fe723c287cca56429f89071149f1d96bb4dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:46:48.138892  596998 cache.go:115] /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1213 11:46:48.138901  596998 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 51.398µs
	I1213 11:46:48.138905  596998 cache.go:115] /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1213 11:46:48.138908  596998 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1213 11:46:48.138894  596998 cache.go:115] /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1213 11:46:48.138912  596998 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 101.006µs
	I1213 11:46:48.138918  596998 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1213 11:46:48.138918  596998 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 56.075µs
	I1213 11:46:48.138924  596998 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1213 11:46:48.138940  596998 cache.go:115] /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1213 11:46:48.138947  596998 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 191.059µs
	I1213 11:46:48.138952  596998 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1213 11:46:48.138934  596998 cache.go:107] acquiring lock: {Name:mkc6bf22ce18468a92a774694a4b49cbc277f1ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:46:48.138948  596998 cache.go:107] acquiring lock: {Name:mk26d49691f1ca365a0728b2ae008656f80369ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:46:48.138975  596998 cache.go:115] /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1213 11:46:48.138980  596998 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 47.803µs
	I1213 11:46:48.138985  596998 cache.go:115] /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1213 11:46:48.138986  596998 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1213 11:46:48.138992  596998 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 44.924µs
	I1213 11:46:48.138999  596998 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1213 11:46:48.139013  596998 cache.go:87] Successfully saved all images to host disk.
	I1213 11:46:48.157619  596998 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:46:48.157642  596998 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:46:48.157658  596998 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:46:48.157688  596998 start.go:360] acquireMachinesLock for no-preload-333352: {Name:mkcf6f110441e125d79b38a8f8cc1a9606a821b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:46:48.157750  596998 start.go:364] duration metric: took 36.333µs to acquireMachinesLock for "no-preload-333352"
	I1213 11:46:48.157773  596998 start.go:96] Skipping create...Using existing machine configuration
	I1213 11:46:48.157778  596998 fix.go:54] fixHost starting: 
	I1213 11:46:48.158031  596998 cli_runner.go:164] Run: docker container inspect no-preload-333352 --format={{.State.Status}}
	I1213 11:46:48.175033  596998 fix.go:112] recreateIfNeeded on no-preload-333352: state=Stopped err=<nil>
	W1213 11:46:48.175073  596998 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 11:46:48.180317  596998 out.go:252] * Restarting existing docker container for "no-preload-333352" ...
	I1213 11:46:48.180439  596998 cli_runner.go:164] Run: docker start no-preload-333352
	I1213 11:46:48.429680  596998 cli_runner.go:164] Run: docker container inspect no-preload-333352 --format={{.State.Status}}
	I1213 11:46:48.453033  596998 kic.go:430] container "no-preload-333352" state is running.
	I1213 11:46:48.453454  596998 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-333352
	I1213 11:46:48.479808  596998 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/config.json ...
	I1213 11:46:48.480042  596998 machine.go:94] provisionDockerMachine start ...
	I1213 11:46:48.480102  596998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:46:48.503420  596998 main.go:143] libmachine: Using SSH client type: native
	I1213 11:46:48.503750  596998 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1213 11:46:48.503759  596998 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:46:48.504579  596998 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 11:46:51.658471  596998 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-333352
	
	I1213 11:46:51.658499  596998 ubuntu.go:182] provisioning hostname "no-preload-333352"
	I1213 11:46:51.658568  596998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:46:51.680359  596998 main.go:143] libmachine: Using SSH client type: native
	I1213 11:46:51.680665  596998 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1213 11:46:51.680681  596998 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-333352 && echo "no-preload-333352" | sudo tee /etc/hostname
	I1213 11:46:51.840345  596998 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-333352
	
	I1213 11:46:51.840432  596998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:46:51.858862  596998 main.go:143] libmachine: Using SSH client type: native
	I1213 11:46:51.859190  596998 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1213 11:46:51.859212  596998 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-333352' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-333352/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-333352' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:46:52.011439  596998 main.go:143] libmachine: SSH cmd err, output: <nil>: 
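Note: the SSH snippet above is minikube's idempotent /etc/hosts update: it rewrites or appends the 127.0.1.1 entry only when the node hostname is not already present. A standalone sketch of the same pattern, with the hostname parameterized (HOST is illustrative, not from the source):

    # Minimal sketch of an idempotent 127.0.1.1 hostname entry.
    HOST=no-preload-333352
    grep -q "\s${HOST}$" /etc/hosts || echo "127.0.1.1 ${HOST}" | sudo tee -a /etc/hosts
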
	I1213 11:46:52.011473  596998 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-307042/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-307042/.minikube}
	I1213 11:46:52.011517  596998 ubuntu.go:190] setting up certificates
	I1213 11:46:52.011538  596998 provision.go:84] configureAuth start
	I1213 11:46:52.011606  596998 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-333352
	I1213 11:46:52.030543  596998 provision.go:143] copyHostCerts
	I1213 11:46:52.030630  596998 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem, removing ...
	I1213 11:46:52.030645  596998 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem
	I1213 11:46:52.030900  596998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem (1082 bytes)
	I1213 11:46:52.031021  596998 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem, removing ...
	I1213 11:46:52.031034  596998 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem
	I1213 11:46:52.031064  596998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem (1123 bytes)
	I1213 11:46:52.031134  596998 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem, removing ...
	I1213 11:46:52.031144  596998 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem
	I1213 11:46:52.031169  596998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem (1675 bytes)
	I1213 11:46:52.031226  596998 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem org=jenkins.no-preload-333352 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-333352]
	I1213 11:46:52.199052  596998 provision.go:177] copyRemoteCerts
	I1213 11:46:52.199122  596998 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:46:52.199163  596998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:46:52.218347  596998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/no-preload-333352/id_rsa Username:docker}
	I1213 11:46:52.322404  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:46:52.340755  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 11:46:52.358393  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 11:46:52.375864  596998 provision.go:87] duration metric: took 364.299362ms to configureAuth
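Note: configureAuth above copies the CA material to the host store and generates a machine server certificate whose SANs cover loopback, the node IP and the profile hostnames (the san=[...] list in the log). A sketch to inspect those SANs, assuming openssl is available on the host and using the server.pem path from the log:

    # Print the Subject Alternative Names baked into the machine cert.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
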
	I1213 11:46:52.375890  596998 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:46:52.376105  596998 config.go:182] Loaded profile config "no-preload-333352": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:46:52.376112  596998 machine.go:97] duration metric: took 3.896062654s to provisionDockerMachine
	I1213 11:46:52.376121  596998 start.go:293] postStartSetup for "no-preload-333352" (driver="docker")
	I1213 11:46:52.376132  596998 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:46:52.376180  596998 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:46:52.376225  596998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:46:52.393759  596998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/no-preload-333352/id_rsa Username:docker}
	I1213 11:46:52.503058  596998 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:46:52.506632  596998 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:46:52.506662  596998 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:46:52.506674  596998 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/addons for local assets ...
	I1213 11:46:52.506753  596998 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/files for local assets ...
	I1213 11:46:52.506839  596998 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> 3089152.pem in /etc/ssl/certs
	I1213 11:46:52.506949  596998 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:46:52.514878  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 11:46:52.533605  596998 start.go:296] duration metric: took 157.452449ms for postStartSetup
	I1213 11:46:52.533696  596998 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:46:52.533746  596998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:46:52.551775  596998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/no-preload-333352/id_rsa Username:docker}
	I1213 11:46:52.655971  596998 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:46:52.661022  596998 fix.go:56] duration metric: took 4.503236152s for fixHost
	I1213 11:46:52.661051  596998 start.go:83] releasing machines lock for "no-preload-333352", held for 4.503288469s
	I1213 11:46:52.661123  596998 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-333352
	I1213 11:46:52.678128  596998 ssh_runner.go:195] Run: cat /version.json
	I1213 11:46:52.678192  596998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:46:52.678486  596998 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:46:52.678544  596998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:46:52.698809  596998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/no-preload-333352/id_rsa Username:docker}
	I1213 11:46:52.701663  596998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/no-preload-333352/id_rsa Username:docker}
	I1213 11:46:52.895685  596998 ssh_runner.go:195] Run: systemctl --version
	I1213 11:46:52.902479  596998 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:46:52.907001  596998 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:46:52.907123  596998 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:46:52.915282  596998 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 11:46:52.915312  596998 start.go:496] detecting cgroup driver to use...
	I1213 11:46:52.915343  596998 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 11:46:52.915421  596998 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 11:46:52.933908  596998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:46:52.947931  596998 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:46:52.947999  596998 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:46:52.963993  596998 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:46:52.977424  596998 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:46:53.103160  596998 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:46:53.238170  596998 docker.go:234] disabling docker service ...
	I1213 11:46:53.238265  596998 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:46:53.257118  596998 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:46:53.272790  596998 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:46:53.410295  596998 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:46:53.530871  596998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:46:53.544130  596998 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:46:53.559695  596998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 11:46:53.568863  596998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 11:46:53.578325  596998 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 11:46:53.578399  596998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 11:46:53.588010  596998 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:46:53.597447  596998 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 11:46:53.606673  596998 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:46:53.616093  596998 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:46:53.624546  596998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 11:46:53.633591  596998 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 11:46:53.642957  596998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 11:46:53.652128  596998 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:46:53.659821  596998 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:46:53.667713  596998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:46:53.790713  596998 ssh_runner.go:195] Run: sudo systemctl restart containerd
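Note: the sed sequence above rewrites /etc/containerd/config.toml in place before restarting containerd: cgroupfs as the cgroup driver (SystemdCgroup = false), the pinned pause sandbox image, the v2 runc shim, the CNI conf_dir, and enable_unprivileged_ports. A sketch to confirm the result from inside the node, assuming minikube ssh against this profile:

    # Verify the rewritten containerd settings from inside the node.
    out/minikube-linux-arm64 ssh -p no-preload-333352 -- \
      grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
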
	I1213 11:46:53.892894  596998 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 11:46:53.893007  596998 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 11:46:53.896921  596998 start.go:564] Will wait 60s for crictl version
	I1213 11:46:53.897007  596998 ssh_runner.go:195] Run: which crictl
	I1213 11:46:53.900594  596998 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:46:53.944666  596998 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 11:46:53.944790  596998 ssh_runner.go:195] Run: containerd --version
	I1213 11:46:53.967810  596998 ssh_runner.go:195] Run: containerd --version
	I1213 11:46:53.993455  596998 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 11:46:53.996395  596998 cli_runner.go:164] Run: docker network inspect no-preload-333352 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:46:54.023910  596998 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 11:46:54.028455  596998 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:46:54.039026  596998 kubeadm.go:884] updating cluster {Name:no-preload-333352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-333352 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:46:54.039148  596998 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 11:46:54.039201  596998 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:46:54.065782  596998 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 11:46:54.065805  596998 cache_images.go:86] Images are preloaded, skipping loading
	I1213 11:46:54.065813  596998 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 11:46:54.065928  596998 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-333352 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-333352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
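Note: in the kubelet drop-in above, the empty ExecStart= line is deliberate: systemd treats a bare ExecStart= as "clear any previously defined command", so the second ExecStart= fully replaces the packaged unit's invocation rather than appending to it. A sketch to view the merged unit once the drop-in is installed (the files are scp'd to their paths a few lines below):

    # Show the base unit plus the 10-kubeadm.conf drop-in, then reload.
    sudo systemctl cat kubelet
    sudo systemctl daemon-reload && sudo systemctl restart kubelet
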
	I1213 11:46:54.066000  596998 ssh_runner.go:195] Run: sudo crictl info
	I1213 11:46:54.093275  596998 cni.go:84] Creating CNI manager for ""
	I1213 11:46:54.093302  596998 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 11:46:54.093325  596998 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 11:46:54.093349  596998 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-333352 NodeName:no-preload-333352 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Sta
ticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:46:54.093537  596998 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-333352"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
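Note: the generated file above is a multi-document kubeadm config (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) that is scp'd to /var/tmp/minikube/kubeadm.yaml.new below. A sketch to sanity-check which control-plane images it resolves to, assuming kubeadm is on PATH inside the node:

    # List the images kubeadm would pull for this config.
    kubeadm config images list --config /var/tmp/minikube/kubeadm.yaml.new
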
	I1213 11:46:54.093645  596998 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 11:46:54.101713  596998 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 11:46:54.101784  596998 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:46:54.109422  596998 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 11:46:54.122555  596998 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 11:46:54.135656  596998 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1213 11:46:54.148334  596998 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:46:54.151958  596998 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:46:54.162210  596998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:46:54.287595  596998 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:46:54.305395  596998 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352 for IP: 192.168.85.2
	I1213 11:46:54.305417  596998 certs.go:195] generating shared ca certs ...
	I1213 11:46:54.305434  596998 certs.go:227] acquiring lock for ca certs: {Name:mkca84a85b1b664dba08ef567e84d493256f825e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:46:54.305583  596998 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key
	I1213 11:46:54.305641  596998 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key
	I1213 11:46:54.305653  596998 certs.go:257] generating profile certs ...
	I1213 11:46:54.305755  596998 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/client.key
	I1213 11:46:54.305817  596998 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/apiserver.key.cd574fc3
	I1213 11:46:54.305860  596998 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/proxy-client.key
	I1213 11:46:54.305974  596998 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem (1338 bytes)
	W1213 11:46:54.306019  596998 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915_empty.pem, impossibly tiny 0 bytes
	I1213 11:46:54.306031  596998 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:46:54.306061  596998 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:46:54.306090  596998 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:46:54.306117  596998 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem (1675 bytes)
	I1213 11:46:54.306193  596998 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 11:46:54.306893  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:46:54.331092  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:46:54.350803  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:46:54.368808  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:46:54.387679  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 11:46:54.404957  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 11:46:54.422566  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:46:54.440705  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 11:46:54.458444  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem --> /usr/share/ca-certificates/308915.pem (1338 bytes)
	I1213 11:46:54.476249  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /usr/share/ca-certificates/3089152.pem (1708 bytes)
	I1213 11:46:54.494025  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:46:54.512671  596998 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:46:54.526330  596998 ssh_runner.go:195] Run: openssl version
	I1213 11:46:54.532951  596998 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3089152.pem
	I1213 11:46:54.540955  596998 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3089152.pem /etc/ssl/certs/3089152.pem
	I1213 11:46:54.548967  596998 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3089152.pem
	I1213 11:46:54.552993  596998 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:22 /usr/share/ca-certificates/3089152.pem
	I1213 11:46:54.553060  596998 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3089152.pem
	I1213 11:46:54.596516  596998 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:46:54.604001  596998 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:46:54.611355  596998 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:46:54.618889  596998 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:46:54.622912  596998 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:46:54.623031  596998 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:46:54.665964  596998 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:46:54.674514  596998 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/308915.pem
	I1213 11:46:54.683052  596998 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/308915.pem /etc/ssl/certs/308915.pem
	I1213 11:46:54.691830  596998 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/308915.pem
	I1213 11:46:54.696558  596998 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:22 /usr/share/ca-certificates/308915.pem
	I1213 11:46:54.696685  596998 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308915.pem
	I1213 11:46:54.739286  596998 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
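Note: the test/ln pairs above install each CA under /usr/share/ca-certificates and expose it in /etc/ssl/certs under the name <subject-hash>.0, which is how OpenSSL's directory lookup finds trust anchors. A sketch showing where a hash such as b5213941 (seen above) comes from:

    # Compute the subject hash that names the /etc/ssl/certs/<hash>.0 symlink.
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
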
	I1213 11:46:54.747030  596998 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:46:54.751301  596998 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 11:46:54.792521  596998 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 11:46:54.848244  596998 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 11:46:54.897199  596998 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 11:46:54.938465  596998 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 11:46:54.979853  596998 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
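Note: each "openssl x509 -checkend 86400" above asks whether the certificate expires within the next 86400 seconds; exit status 0 means it stays valid for at least another 24 hours. A one-line sketch of the same check with an explicit verdict:

    # -checkend N exits 0 iff the cert is still valid N seconds from now.
    openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt \
      && echo "valid for >= 24h" || echo "expires within 24h"
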
	I1213 11:46:55.021716  596998 kubeadm.go:401] StartCluster: {Name:no-preload-333352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-333352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:46:55.021819  596998 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 11:46:55.021905  596998 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:46:55.050987  596998 cri.go:89] found id: ""
	I1213 11:46:55.051064  596998 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:46:55.059300  596998 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 11:46:55.059321  596998 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 11:46:55.059393  596998 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 11:46:55.066981  596998 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
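The `sudo ls` probe at 11:46:55.051 is what flips the run into restart mode: if the kubelet flags file, kubelet config, and etcd data directory all exist, the existing control plane is restarted rather than re-initialized. A sketch of that decision, assuming the three paths shown in the log (not minikube's actual helper):

```go
package main

import (
	"fmt"
	"os"
)

// hasExistingKubeadmState mirrors the `sudo ls` probe above: only when
// all three artifacts of a previous `kubeadm init` are present does a
// control-plane restart make sense.
func hasExistingKubeadmState() bool {
	for _, p := range []string{
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/kubelet/config.yaml",
		"/var/lib/minikube/etcd",
	} {
		if _, err := os.Stat(p); err != nil {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println("restartable:", hasExistingKubeadmState())
}
```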
	I1213 11:46:55.067384  596998 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-333352" does not appear in /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 11:46:55.067494  596998 kubeconfig.go:62] /home/jenkins/minikube-integration/22127-307042/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-333352" cluster setting kubeconfig missing "no-preload-333352" context setting]
	I1213 11:46:55.067794  596998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/kubeconfig: {Name:mk9039d1d506122ff52224469c478e739c5ebabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
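The kubeconfig repair above adds the missing "no-preload-333352" cluster and context stanzas and rewrites the file under a write lock. A minimal sketch of that step using client-go's clientcmd package (an assumed dependency; auth and CA fields are omitted, and minikube's own helper differs in detail):

```go
package main

import (
	"log"

	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// addClusterEntry inserts the missing cluster/context pair and writes
// the kubeconfig back out, as in the repair logged above.
func addClusterEntry(path, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	cfg.Clusters[name] = &clientcmdapi.Cluster{Server: server}
	cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
	cfg.CurrentContext = name
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	err := addClusterEntry(
		"/home/jenkins/minikube-integration/22127-307042/kubeconfig",
		"no-preload-333352", "https://192.168.85.2:8443")
	if err != nil {
		log.Fatal(err)
	}
}
```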
	I1213 11:46:55.069069  596998 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 11:46:55.083063  596998 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1213 11:46:55.083096  596998 kubeadm.go:602] duration metric: took 23.769764ms to restartPrimaryControlPlane
	I1213 11:46:55.083110  596998 kubeadm.go:403] duration metric: took 61.40393ms to StartCluster
	I1213 11:46:55.083126  596998 settings.go:142] acquiring lock: {Name:mk079e9a25ebbc2c8fbae42d4c6ed096a652c00b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:46:55.083190  596998 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 11:46:55.083859  596998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/kubeconfig: {Name:mk9039d1d506122ff52224469c478e739c5ebabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:46:55.084085  596998 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 11:46:55.084484  596998 config.go:182] Loaded profile config "no-preload-333352": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:46:55.084498  596998 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 11:46:55.084648  596998 addons.go:70] Setting storage-provisioner=true in profile "no-preload-333352"
	I1213 11:46:55.084663  596998 addons.go:239] Setting addon storage-provisioner=true in "no-preload-333352"
	I1213 11:46:55.084673  596998 addons.go:70] Setting dashboard=true in profile "no-preload-333352"
	I1213 11:46:55.084687  596998 addons.go:239] Setting addon dashboard=true in "no-preload-333352"
	W1213 11:46:55.084692  596998 addons.go:248] addon dashboard should already be in state true
	I1213 11:46:55.084699  596998 addons.go:70] Setting default-storageclass=true in profile "no-preload-333352"
	I1213 11:46:55.084713  596998 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-333352"
	I1213 11:46:55.084715  596998 host.go:66] Checking if "no-preload-333352" exists ...
	I1213 11:46:55.085024  596998 cli_runner.go:164] Run: docker container inspect no-preload-333352 --format={{.State.Status}}
	I1213 11:46:55.085259  596998 cli_runner.go:164] Run: docker container inspect no-preload-333352 --format={{.State.Status}}
	I1213 11:46:55.084693  596998 host.go:66] Checking if "no-preload-333352" exists ...
	I1213 11:46:55.086123  596998 cli_runner.go:164] Run: docker container inspect no-preload-333352 --format={{.State.Status}}
	I1213 11:46:55.089958  596998 out.go:179] * Verifying Kubernetes components...
	I1213 11:46:55.092885  596998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:46:55.118731  596998 addons.go:239] Setting addon default-storageclass=true in "no-preload-333352"
	I1213 11:46:55.118772  596998 host.go:66] Checking if "no-preload-333352" exists ...
	I1213 11:46:55.119210  596998 cli_runner.go:164] Run: docker container inspect no-preload-333352 --format={{.State.Status}}
	I1213 11:46:55.142748  596998 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:46:55.148113  596998 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 11:46:55.148237  596998 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:46:55.148248  596998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 11:46:55.148312  596998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:46:55.153556  596998 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 11:46:55.156401  596998 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 11:46:55.156436  596998 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 11:46:55.156518  596998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:46:55.170900  596998 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 11:46:55.170922  596998 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 11:46:55.170990  596998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
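The three cli_runner calls above use a Go template with `docker container inspect` to find which host port is mapped to the container's SSH port 22 (here, 33435). A sketch of that lookup, assuming docker is on PATH:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort extracts the host port bound to container port 22/tcp,
// using the same inspect template as the log lines above.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("no-preload-333352")
	fmt.Println(port, err)
}
```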
	I1213 11:46:55.202915  596998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/no-preload-333352/id_rsa Username:docker}
	I1213 11:46:55.221059  596998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/no-preload-333352/id_rsa Username:docker}
	I1213 11:46:55.234212  596998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/no-preload-333352/id_rsa Username:docker}
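Each sshutil line above opens an SSH client to 127.0.0.1:33435 as user "docker" with the machine's id_rsa key; the addon manifests are then scp'd over those connections. A minimal sketch with golang.org/x/crypto/ssh (an assumption; minikube wraps its own SSH helper):

```go
package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// dialNode opens a key-authenticated SSH connection like the clients
// created by sshutil above.
func dialNode(addr, user, keyPath string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User: user,
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// The harness talks to a local container over a forwarded
		// port, so host-key checking is skipped; don't do this
		// against real remote hosts.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	return ssh.Dial("tcp", addr, cfg)
}

func main() {
	client, err := dialNode("127.0.0.1:33435", "docker",
		"/home/jenkins/minikube-integration/22127-307042/.minikube/machines/no-preload-333352/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
}
```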
	I1213 11:46:55.322944  596998 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:46:55.404599  596998 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 11:46:55.404621  596998 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 11:46:55.410339  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:46:55.424868  596998 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 11:46:55.424934  596998 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 11:46:55.437611  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 11:46:55.466532  596998 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 11:46:55.466598  596998 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 11:46:55.533480  596998 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 11:46:55.533543  596998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 11:46:55.558338  596998 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 11:46:55.558404  596998 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 11:46:55.573707  596998 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 11:46:55.573775  596998 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 11:46:55.586950  596998 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 11:46:55.587019  596998 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 11:46:55.599876  596998 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 11:46:55.599941  596998 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 11:46:55.613189  596998 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 11:46:55.613214  596998 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 11:46:55.626391  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 11:46:56.170314  596998 node_ready.go:35] waiting up to 6m0s for node "no-preload-333352" to be "Ready" ...
	W1213 11:46:56.170674  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:56.170727  596998 retry.go:31] will retry after 321.378191ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:46:56.170779  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:56.170796  596998 retry.go:31] will retry after 211.981666ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:46:56.170985  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:56.171002  596998 retry.go:31] will retry after 239.070892ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:56.383589  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 11:46:56.411068  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:46:56.469548  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:56.469643  596998 retry.go:31] will retry after 394.603627ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:46:56.477518  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:56.477558  596998 retry.go:31] will retry after 498.653036ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:56.492479  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:46:56.550411  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:56.550465  596998 retry.go:31] will retry after 487.503108ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:56.865341  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:46:56.967936  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:56.968025  596998 retry.go:31] will retry after 717.718245ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:56.977052  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 11:46:57.038612  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:46:57.046035  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:57.046077  596998 retry.go:31] will retry after 431.172191ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:46:57.103477  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:57.103510  596998 retry.go:31] will retry after 495.110582ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:57.477604  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:46:57.542568  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:57.542608  596998 retry.go:31] will retry after 1.264774015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:57.599678  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:46:57.658440  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:57.658483  596998 retry.go:31] will retry after 976.781113ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:57.686351  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:46:57.782906  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:57.782941  596998 retry.go:31] will retry after 1.210299273s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:46:58.170918  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
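The node_ready.go warning above is part of a poll loop: connection-refused errors are treated as transient and the node's Ready condition is re-checked until the 6m0s timeout. A sketch of such a loop with client-go (assumed APIs; minikube's own implementation differs in detail):

```go
package nodewait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the node until its Ready condition is True or
// the timeout elapses; transient API errors (like the refused
// connection above) are swallowed and the poll continues.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}
```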
	I1213 11:46:58.635525  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:46:58.695473  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:58.695513  596998 retry.go:31] will retry after 770.527982ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:58.807674  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:46:58.874925  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:58.874962  596998 retry.go:31] will retry after 1.331403387s (command and stderr identical to the warning above; duplicate omitted)
	I1213 11:46:58.994063  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:46:59.058328  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:59.058362  596998 retry.go:31] will retry after 1.540138362s (command and stderr identical to the warning above; duplicate omitted)
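	[Editor's note] Every apply in this section fails before anything reaches the cluster: kubectl validates each manifest against the apiserver's OpenAPI schema (the Get on https://localhost:8443/openapi/v2 in the stderr), and with nothing listening on 8443 the dial is refused and kubectl exits 1. The suggested --validate=false would only skip the schema fetch; the apply itself would still fail against a dead apiserver. Minikube's addon applier responds by rescheduling each apply with a growing, jittered delay (the retry.go:31 lines). Below is a minimal Go sketch of that retry-with-backoff pattern; it illustrates the technique only and is not minikube's actual retry.go, and retryWithBackoff is a hypothetical helper name.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs fn up to maxAttempts times, sleeping between
// attempts with an exponentially growing, jittered delay (base, ~2*base,
// ~4*base, ...) and returning the last error if every attempt fails.
func retryWithBackoff(fn func() error, maxAttempts int, base time.Duration) error {
	var err error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		// Jitter: pick a delay in [backoff, 2*backoff) so concurrent
		// retriers don't all wake at once.
		backoff := base << uint(attempt)
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return err
}

func main() {
	err := retryWithBackoff(func() error {
		return errors.New("connect: connection refused") // stand-in for the failing kubectl apply above
	}, 5, 500*time.Millisecond)
	fmt.Println("gave up:", err)
}

	Each of the three addon applies (storage-provisioner, storageclass, dashboard) retries on its own schedule, which is why the delays in this log interleave and only grow on average rather than strictly.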
	I1213 11:46:59.466331  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:46:59.526972  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout empty; stderr identical to the first storage-provisioner validation failure above; duplicate omitted)
	I1213 11:46:59.527006  596998 retry.go:31] will retry after 1.010658159s (command and stderr identical to the warning above; duplicate omitted)
	W1213 11:47:00.171512  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:47:00.206721  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:47:00.355103  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr identical to the first dashboard validation failure above; duplicate omitted)
	I1213 11:47:00.355138  596998 retry.go:31] will retry after 2.476956922s (command and stderr identical to the warning above; duplicate omitted)
	I1213 11:47:00.538651  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:47:00.599510  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:47:00.607813  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout empty; stderr identical to the first storage-provisioner validation failure above; duplicate omitted)
	I1213 11:47:00.607842  596998 retry.go:31] will retry after 2.846567669s (command and stderr identical to the warning above; duplicate omitted)
	W1213 11:47:00.671803  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	(stdout empty; stderr identical to the first storageclass validation failure above; duplicate omitted)
	I1213 11:47:00.671834  596998 retry.go:31] will retry after 1.147758556s (command and stderr identical to the warning above; duplicate omitted)
	I1213 11:47:01.820380  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:47:01.879212  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	(stdout empty; stderr identical to the first storageclass validation failure above; duplicate omitted)
	I1213 11:47:01.879244  596998 retry.go:31] will retry after 3.144985192s (command and stderr identical to the warning above; duplicate omitted)
	W1213 11:47:02.670957  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:47:02.832252  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:47:02.902734  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr identical to the first dashboard validation failure above; duplicate omitted)
	I1213 11:47:02.902771  596998 retry.go:31] will retry after 3.378828885s (command and stderr identical to the warning above; duplicate omitted)
	I1213 11:47:03.455263  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:47:03.521452  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout empty; stderr identical to the first storage-provisioner validation failure above; duplicate omitted)
	I1213 11:47:03.521486  596998 retry.go:31] will retry after 3.23032482s (command and stderr identical to the warning above; duplicate omitted)
	I1213 11:47:05.024515  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:47:05.083539  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	(stdout empty; stderr identical to the first storageclass validation failure above; duplicate omitted)
	I1213 11:47:05.083572  596998 retry.go:31] will retry after 3.91018085s (command and stderr identical to the warning above; duplicate omitted)
	W1213 11:47:05.171119  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:47:06.282348  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:47:06.342380  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr identical to the first dashboard validation failure above; duplicate omitted)
	I1213 11:47:06.342417  596998 retry.go:31] will retry after 4.569051902s (command and stderr identical to the warning above; duplicate omitted)
	I1213 11:47:06.752192  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:47:06.812324  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout empty; stderr identical to the first storage-provisioner validation failure above; duplicate omitted)
	I1213 11:47:06.812362  596998 retry.go:31] will retry after 3.621339093s (command and stderr identical to the warning above; duplicate omitted)
	W1213 11:47:07.171170  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:47:08.994724  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:47:09.059715  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	(stdout empty; stderr identical to the first storageclass validation failure above; duplicate omitted)
	I1213 11:47:09.059750  596998 retry.go:31] will retry after 3.336187079s (command and stderr identical to the warning above; duplicate omitted)
	W1213 11:47:09.171521  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
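	[Editor's note] The node_ready.go:55 warnings interleaved through this section come from a separate poll: minikube fetches the Node object for "no-preload-333352" from the apiserver at 192.168.85.2:8443 and checks its "Ready" condition, retrying while the connection is refused. A minimal sketch of an equivalent check using client-go follows; the kubeconfig path is taken from the log above, but the wiring and the nodeReady helper are illustrative assumptions, not minikube's code.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node has condition Ready=True.
func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		// While the apiserver is down this is exactly the error in the log:
		// "dial tcp 192.168.85.2:8443: connect: connection refused"
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path from the log above
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ready, err := nodeReady(context.Background(), cs, "no-preload-333352")
	fmt.Println(ready, err)
}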
	I1213 11:47:10.434821  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:47:10.527681  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout empty; stderr identical to the first storage-provisioner validation failure above; duplicate omitted)
	I1213 11:47:10.527715  596998 retry.go:31] will retry after 8.747216293s (command and stderr identical to the warning above; duplicate omitted)
	I1213 11:47:10.911760  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:47:10.973491  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr identical to the first dashboard validation failure above; duplicate omitted)
	I1213 11:47:10.973530  596998 retry.go:31] will retry after 6.563764078s (command and stderr identical to the warning above; duplicate omitted)
	W1213 11:47:11.671509  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:47:12.396136  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:47:12.451525  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	(stdout empty; stderr identical to the first storageclass validation failure above; duplicate omitted)
	I1213 11:47:12.451555  596998 retry.go:31] will retry after 12.979902201s (command and stderr identical to the warning above; duplicate omitted)
	W1213 11:47:13.671774  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:47:16.171040  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
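	[Editor's note] Note that both dials in this section are refused: kubectl inside the node cannot reach localhost:8443, and the external readiness poll cannot reach 192.168.85.2:8443. Refusals on both paths point at the kube-apiserver process itself being down (nothing bound to port 8443) rather than a Docker networking or port-forwarding problem. A quick way to confirm, sketched in Go with the two addresses from this log:

package main

import (
	"fmt"
	"net"
	"time"
)

// probe reports whether anything is accepting TCP connections at addr.
// If both addresses below refuse, the apiserver is not running at all,
// as opposed to running but unreachable over the network.
func probe(addr string) {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		fmt.Printf("%s: %v\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("%s: listening\n", addr)
}

func main() {
	probe("127.0.0.1:8443")    // what kubectl inside the node dials ("localhost:8443")
	probe("192.168.85.2:8443") // what the node_ready poll dials from outside
}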
	I1213 11:47:17.537629  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:47:17.605650  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:47:17.605686  596998 retry.go:31] will retry after 13.028008559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:47:18.171361  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:47:19.275997  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:47:19.342259  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:47:19.342290  596998 retry.go:31] will retry after 20.165472284s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:47:20.671224  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:47:23.171107  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:47:25.171144  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:47:25.431592  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:47:25.517211  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:47:25.517246  596998 retry.go:31] will retry after 17.190857405s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:47:27.671038  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:47:29.671905  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:47:30.634538  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:47:30.747730  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:47:30.747766  596998 retry.go:31] will retry after 8.253172442s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:47:32.170901  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:47:34.170950  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:47:36.171702  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:47:38.671029  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:47:39.001281  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:47:39.065716  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:47:39.065747  596998 retry.go:31] will retry after 30.140073357s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:47:39.508018  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:47:39.565709  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:47:39.565750  596998 retry.go:31] will retry after 13.258391709s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:47:41.170971  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:47:42.708360  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:47:42.777228  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:47:42.777262  596998 retry.go:31] will retry after 14.462485223s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:47:43.171411  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:47:45.171885  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:47:47.671008  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:47:50.170919  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:47:52.171024  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:47:52.825279  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:47:52.895300  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:47:52.895334  596998 retry.go:31] will retry after 42.53439734s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:47:54.171468  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:47:56.671010  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:47:57.240410  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:47:57.300003  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:47:57.300038  596998 retry.go:31] will retry after 43.551114065s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:47:58.671871  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:01.171150  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:03.671009  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:06.171060  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:08.670995  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:48:09.206164  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:48:09.266520  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:48:09.266558  596998 retry.go:31] will retry after 38.20317151s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:48:10.671430  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:12.671901  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:15.171124  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:17.671141  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:19.671553  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:22.170909  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:24.170990  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:26.171623  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:28.671096  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:31.171091  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:33.670895  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:48:35.430795  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:48:35.490443  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:48:35.490550  596998 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
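The retry.go:31 lines above show the addon applier backing off between attempts with randomized, growing delays (12.9s, 13.0s, 20.1s, up to 42.5s) until its budget is exhausted, at which point the failure is surfaced through out.go as the warning just logged. A rough Go sketch of that retry shape, with the base delay, growth factor, and deadline as assumptions rather than minikube's actual policy:

// retrysketch.go - illustrative retry-with-jittered-backoff loop in the
// shape of the "will retry after Xs" log lines. Not minikube's actual
// retry.go; delay schedule and deadline here are assumptions.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

func applyWithRetry(manifest string, deadline time.Duration) error {
	base := 10 * time.Second
	start := time.Now()
	for {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			// Mirrors the final "Enabling '...' returned an error" warning.
			return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
		}
		wait := base + time.Duration(rand.Int63n(int64(base))) // jitter
		fmt.Printf("will retry after %s: apply %s failed\n", wait, manifest)
		time.Sleep(wait)
		base = base * 3 / 2 // grow the delay, roughly like the log's progression
	}
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 2*time.Minute); err != nil {
		fmt.Println("!", err)
	}
}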
	W1213 11:48:35.670951  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:37.671093  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:40.171057  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:48:40.852278  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:48:40.916394  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:48:40.916509  596998 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
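In parallel with the addon retries, node_ready.go polls the node's Ready condition every couple of seconds against https://192.168.85.2:8443 and hits the same connection refused. The check is equivalent to fetching the Node object and reading its Ready condition; a minimal client-go version (standalone sketch, with the kubeconfig path and node name taken from the log) looks like:

// nodeready.go - minimal sketch of the readiness check behind the
// node_ready.go lines: fetch the Node and inspect its Ready condition.
// Standalone illustration; kubeconfig path and node name come from the log.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-333352", metav1.GetOptions{})
	if err != nil {
		// With the apiserver down this is the "connection refused" seen above.
		fmt.Println("error getting node (will retry):", err)
		return
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Println("Ready condition:", c.Status)
		}
	}
}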
	W1213 11:48:42.171144  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:44.171515  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:46.671821  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:48:47.470510  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:48:47.538580  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:48:47.538682  596998 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
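The repeated validation failures above are all one underlying fault: `kubectl apply` performs client-side validation by downloading the OpenAPI schema from the API server, so while nothing is listening on localhost:8443 every manifest is rejected with "connection refused" no matter what it contains. A minimal sketch (standalone Go, not part of minikube; the port is copied from this log, and /healthz is assumed readable by unauthenticated clients, which is the Kubernetes default) that separates "server unreachable" from "manifest invalid" before retrying an apply:

// probe.go: a sketch, not minikube code. Checks whether the apiserver that
// kubectl's validation depends on is reachable at all.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed certificate; skipping verification
		// is acceptable for a reachability probe, never for real API traffic.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://localhost:8443/healthz")
	if err != nil {
		// Matches the "dial tcp [::1]:8443: connect: connection refused" above.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver healthz status:", resp.Status)
}

If the probe fails, retrying the apply (or passing --validate=false, as the error text suggests) cannot help until the apiserver itself comes back.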
	I1213 11:48:47.541933  596998 out.go:179] * Enabled addons: 
	I1213 11:48:47.544738  596998 addons.go:530] duration metric: took 1m52.460244741s for enable addons: enabled=[]
	W1213 11:48:49.170971  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:51.171371  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:53.670885  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:55.671127  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:58.171050  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:00.171123  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:02.171184  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:04.670961  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:06.671604  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:09.171017  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:11.671001  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:13.671410  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:15.671910  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:18.171029  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:20.670977  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:23.170985  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:25.171248  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:27.670921  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:30.171027  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:32.171089  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:34.671060  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:37.170891  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:39.171056  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:41.670906  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:44.170836  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:46.171894  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:48.671002  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:51.170981  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:53.671005  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:55.671144  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:58.171063  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:00.671726  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:03.171383  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:05.171448  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:07.171694  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:09.671514  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:12.171022  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:14.670928  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:16.670984  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:19.170955  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:21.670902  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:23.671215  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:25.671545  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:28.171776  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:30.671175  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:33.171188  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:35.171254  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:37.171656  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:39.670878  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:41.670932  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:44.170877  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:46.171667  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:48.670818  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:50.670856  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:52.670921  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:55.171417  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:57.671390  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:00.171032  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:02.670957  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:04.671010  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:07.170942  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:09.171076  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:11.171630  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:13.671044  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:15.671569  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:18.171750  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:20.671044  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:23.171197  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:25.671191  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:28.171352  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:30.670915  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:32.671036  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:34.671254  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:36.671675  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:39.171023  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:41.671018  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:43.671262  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:45.673760  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:48.171125  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:50.670988  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:53.170873  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:55.171466  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:57.171707  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:59.671078  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:02.171305  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:04.171660  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:06.671628  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:09.170863  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:11.170983  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:13.670824  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:15.671074  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:18.170945  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:20.670832  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:23.170872  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:25.171346  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:27.171433  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:29.670795  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:31.671822  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:34.171181  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:36.670803  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:38.670951  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:41.170820  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:43.170883  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:45.172031  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:47.670782  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:50.171769  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:52.671828  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:55.170984  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:56.171498  596998 node_ready.go:38] duration metric: took 6m0.001140759s for node "no-preload-333352" to be "Ready" ...
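The run of node_ready retries above is minikube polling the node's Ready condition roughly every 2.5s until its 6-minute wait budget expires, which produces the GUEST_START failure below. A minimal sketch of the same pattern with client-go (a sketch under assumptions: the kubeconfig path and node name are copied from this log, and wait.PollUntilContextTimeout stands in for minikube's own retry helper):

// nodeready.go: a sketch of the poll loop behind the node_ready lines above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = wait.PollUntilContextTimeout(context.Background(), 2500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := clientset.CoreV1().Nodes().Get(ctx, "no-preload-333352", metav1.GetOptions{})
			if err != nil {
				// e.g. "connect: connection refused" while the apiserver is down.
				fmt.Println("will retry:", err)
				return false, nil // a nil error keeps the poll going
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	// After 6m0s of refusals this is "context deadline exceeded", as in the log.
	fmt.Println("wait result:", err)
}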
	I1213 11:52:56.174587  596998 out.go:203] 
	W1213 11:52:56.177556  596998 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 11:52:56.177585  596998 out.go:285] * 
	W1213 11:52:56.179740  596998 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 11:52:56.182759  596998 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p no-preload-333352 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-333352
helpers_test.go:244: (dbg) docker inspect no-preload-333352:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db",
	        "Created": "2025-12-13T11:36:44.52795509Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 597136,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T11:46:48.212033137Z",
	            "FinishedAt": "2025-12-13T11:46:46.812235669Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db/hostname",
	        "HostsPath": "/var/lib/docker/containers/ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db/hosts",
	        "LogPath": "/var/lib/docker/containers/ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db/ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db-json.log",
	        "Name": "/no-preload-333352",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-333352:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-333352",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db",
	                "LowerDir": "/var/lib/docker/overlay2/d63500bace015e4f0992e53022eb307c815344bef0eac4b57fc68ef6b6be3891-init/diff:/var/lib/docker/overlay2/5d0ca86320f2c06765995d644a73af30cd6ddb36f5c1a2a6b1ebb24af53cdabd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d63500bace015e4f0992e53022eb307c815344bef0eac4b57fc68ef6b6be3891/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d63500bace015e4f0992e53022eb307c815344bef0eac4b57fc68ef6b6be3891/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d63500bace015e4f0992e53022eb307c815344bef0eac4b57fc68ef6b6be3891/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-333352",
	                "Source": "/var/lib/docker/volumes/no-preload-333352/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-333352",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-333352",
	                "name.minikube.sigs.k8s.io": "no-preload-333352",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "368f444acead1313629634c955e38e7aa3bb1a58261aa4f155fef5ab3cc6d2d9",
	            "SandboxKey": "/var/run/docker/netns/368f444acead",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-333352": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:92:40:ad:16:f6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ee20fc50f482b31273047147a2f419c36704bb98933537d0ac5901a560402043",
	                    "EndpointID": "c1aa6ce135257fa89e5e51421f21414b58021c38959e96fd72756c63a958cfdd",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-333352",
	                        "ca124efb8aeb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
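The inspect output above is the post-mortem's key evidence: the container is Running, 8443/tcp is published to 127.0.0.1:33438, and the container holds 192.168.85.2, so the refused connections point at the apiserver process inside the container rather than at Docker. A minimal sketch (a hypothetical helper, not the actual helpers_test.go code; the container and network names are copied from this log) that extracts exactly those three fields:

// inspect.go: a sketch that decodes the fields the post-mortem relies on.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type inspect struct {
	State struct {
		Status string
	}
	NetworkSettings struct {
		Ports    map[string][]struct{ HostIp, HostPort string }
		Networks map[string]struct{ IPAddress string }
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "no-preload-333352").Output()
	if err != nil {
		panic(err)
	}
	var containers []inspect // docker inspect always emits a JSON array
	if err := json.Unmarshal(out, &containers); err != nil {
		panic(err)
	}
	c := containers[0]
	fmt.Println("state:", c.State.Status)                              // "running"
	fmt.Println("8443/tcp published at:", c.NetworkSettings.Ports["8443/tcp"]) // host 127.0.0.1, port 33438
	fmt.Println("container IP:", c.NetworkSettings.Networks["no-preload-333352"].IPAddress) // 192.168.85.2
}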
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-333352 -n no-preload-333352
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-333352 -n no-preload-333352: exit status 2 (324.919007ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
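helpers_test.go tolerates the non-zero status here because `minikube status` encodes component health in its exit code while still printing the host state on stdout. A minimal sketch of capturing both, assuming only standard os/exec behavior (the binary path and profile name are copied from this log; interpreting the code's meaning is left to minikube's documentation):

// status.go: a sketch of running `minikube status` and keeping both outputs.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "status", "--format={{.Host}}", "-p", "no-preload-333352")
	out, err := cmd.Output() // stdout is returned even on a non-zero exit
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// A non-zero code is expected while components are down; the stdout
		// ("Running") is still meaningful, so report both instead of failing.
		fmt.Printf("status %q, exit code %d\n", string(out), exitErr.ExitCode())
		return
	}
	if err != nil {
		panic(err) // the binary itself could not be started
	}
	fmt.Printf("status %q, exit code 0\n", string(out))
}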
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-333352 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ embed-certs-951675 image list --format=json                                                                                                                                                                                                                │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ pause   │ -p embed-certs-951675 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ unpause │ -p embed-certs-951675 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ delete  │ -p embed-certs-951675                                                                                                                                                                                                                                      │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ delete  │ -p embed-certs-951675                                                                                                                                                                                                                                      │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ delete  │ -p disable-driver-mounts-823668                                                                                                                                                                                                                            │ disable-driver-mounts-823668 │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ start   │ -p default-k8s-diff-port-191845 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:40 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-191845 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:40 UTC │
	│ stop    │ -p default-k8s-diff-port-191845 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:40 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-191845 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:40 UTC │
	│ start   │ -p default-k8s-diff-port-191845 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:41 UTC │
	│ image   │ default-k8s-diff-port-191845 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ pause   │ -p default-k8s-diff-port-191845 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ unpause │ -p default-k8s-diff-port-191845 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ delete  │ -p default-k8s-diff-port-191845                                                                                                                                                                                                                            │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ delete  │ -p default-k8s-diff-port-191845                                                                                                                                                                                                                            │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ start   │ -p newest-cni-796924 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-796924            │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-333352 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-333352            │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ stop    │ -p no-preload-333352 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-333352            │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │ 13 Dec 25 11:46 UTC │
	│ addons  │ enable dashboard -p no-preload-333352 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-333352            │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │ 13 Dec 25 11:46 UTC │
	│ start   │ -p no-preload-333352 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-333352            │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-796924 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-796924            │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │                     │
	│ stop    │ -p newest-cni-796924 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-796924            │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable dashboard -p newest-cni-796924 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-796924            │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ start   │ -p newest-cni-796924 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-796924            │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 11:51:48
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 11:51:48.463604  604010 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:51:48.463796  604010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:51:48.463823  604010 out.go:374] Setting ErrFile to fd 2...
	I1213 11:51:48.463842  604010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:51:48.464235  604010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 11:51:48.465119  604010 out.go:368] Setting JSON to false
	I1213 11:51:48.466102  604010 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":16461,"bootTime":1765610247,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 11:51:48.466204  604010 start.go:143] virtualization:  
	I1213 11:51:48.469444  604010 out.go:179] * [newest-cni-796924] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:51:48.473497  604010 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:51:48.473608  604010 notify.go:221] Checking for updates...
	I1213 11:51:48.479464  604010 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:51:48.482541  604010 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 11:51:48.485448  604010 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 11:51:48.488462  604010 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:51:48.491424  604010 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:51:48.494980  604010 config.go:182] Loaded profile config "newest-cni-796924": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:51:48.495553  604010 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:51:48.518013  604010 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:51:48.518194  604010 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:51:48.596406  604010 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:51:48.586781308 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:51:48.596541  604010 docker.go:319] overlay module found
	I1213 11:51:48.599865  604010 out.go:179] * Using the docker driver based on existing profile
	I1213 11:51:48.602647  604010 start.go:309] selected driver: docker
	I1213 11:51:48.602672  604010 start.go:927] validating driver "docker" against &{Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:51:48.602834  604010 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:51:48.603569  604010 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:51:48.671569  604010 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:51:48.654666754 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:51:48.671930  604010 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 11:51:48.671965  604010 cni.go:84] Creating CNI manager for ""
	I1213 11:51:48.672022  604010 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 11:51:48.672078  604010 start.go:353] cluster config:
	{Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:51:48.675265  604010 out.go:179] * Starting "newest-cni-796924" primary control-plane node in "newest-cni-796924" cluster
	I1213 11:51:48.678207  604010 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 11:51:48.681114  604010 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:51:48.683920  604010 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 11:51:48.683976  604010 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 11:51:48.683989  604010 cache.go:65] Caching tarball of preloaded images
	I1213 11:51:48.684102  604010 preload.go:238] Found /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 11:51:48.684116  604010 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 11:51:48.684232  604010 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/config.json ...
	I1213 11:51:48.684464  604010 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:51:48.711458  604010 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:51:48.711481  604010 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:51:48.711496  604010 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:51:48.711527  604010 start.go:360] acquireMachinesLock for newest-cni-796924: {Name:mkb23dc851632c47983afd0f3cb215d071a4c6d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:51:48.711588  604010 start.go:364] duration metric: took 38.818µs to acquireMachinesLock for "newest-cni-796924"
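	The acquireMachinesLock entries above describe a named lock with a 500ms retry delay and a 10m timeout. A minimal Go sketch of those semantics, polling an O_EXCL lock file (illustrative only; the path and helper name are assumptions, not minikube's actual code):
	
	package main
	
	import (
		"errors"
		"fmt"
		"os"
		"time"
	)
	
	// acquireLock polls until it can create path exclusively, sleeping delay
	// between attempts and giving up after timeout — the Delay/Timeout
	// semantics logged above. The caller removes path to release the lock.
	func acquireLock(path string, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return nil // lock acquired
			}
			if !errors.Is(err, os.ErrExist) {
				return err
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out acquiring %s", path)
			}
			time.Sleep(delay)
		}
	}
	
	func main() {
		if err := acquireLock("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		defer os.Remove("/tmp/machines.lock")
		fmt.Println("lock held")
	}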
	I1213 11:51:48.711608  604010 start.go:96] Skipping create...Using existing machine configuration
	I1213 11:51:48.711613  604010 fix.go:54] fixHost starting: 
	I1213 11:51:48.711888  604010 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:51:48.735758  604010 fix.go:112] recreateIfNeeded on newest-cni-796924: state=Stopped err=<nil>
	W1213 11:51:48.735799  604010 fix.go:138] unexpected machine state, will restart: <nil>
	W1213 11:51:48.171125  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:50.670988  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:51:48.739083  604010 out.go:252] * Restarting existing docker container for "newest-cni-796924" ...
	I1213 11:51:48.739191  604010 cli_runner.go:164] Run: docker start newest-cni-796924
	I1213 11:51:48.989234  604010 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:51:49.013708  604010 kic.go:430] container "newest-cni-796924" state is running.
	I1213 11:51:49.014143  604010 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-796924
	I1213 11:51:49.035818  604010 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/config.json ...
	I1213 11:51:49.036044  604010 machine.go:94] provisionDockerMachine start ...
	I1213 11:51:49.036107  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:49.066663  604010 main.go:143] libmachine: Using SSH client type: native
	I1213 11:51:49.067143  604010 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1213 11:51:49.067157  604010 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:51:49.067832  604010 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47590->127.0.0.1:33440: read: connection reset by peer
	I1213 11:51:52.226322  604010 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-796924
	
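	The handshake error above followed by a successful `hostname` run reflects a dial-and-retry loop: the first TCP connection is reset while the restarted container's sshd is still coming up, so the client retries until the port answers. A self-contained sketch of that wait pattern (address and timeouts are illustrative, not the test's actual values):
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	// waitForSSH dials addr until it accepts a TCP connection or timeout
	// elapses, sleeping delay between attempts.
	func waitForSSH(addr string, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				return nil // port is accepting connections
			}
			time.Sleep(delay) // e.g. connection reset while sshd starts
		}
		return fmt.Errorf("ssh endpoint %s not ready within %s", addr, timeout)
	}
	
	func main() {
		fmt.Println(waitForSSH("127.0.0.1:33440", time.Second, time.Minute))
	}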
	I1213 11:51:52.226353  604010 ubuntu.go:182] provisioning hostname "newest-cni-796924"
	I1213 11:51:52.226417  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:52.244890  604010 main.go:143] libmachine: Using SSH client type: native
	I1213 11:51:52.245240  604010 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1213 11:51:52.245259  604010 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-796924 && echo "newest-cni-796924" | sudo tee /etc/hostname
	I1213 11:51:52.409909  604010 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-796924
	
	I1213 11:51:52.410005  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:52.440908  604010 main.go:143] libmachine: Using SSH client type: native
	I1213 11:51:52.441219  604010 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1213 11:51:52.441235  604010 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-796924' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-796924/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-796924' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:51:52.595320  604010 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:51:52.595345  604010 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-307042/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-307042/.minikube}
	I1213 11:51:52.595378  604010 ubuntu.go:190] setting up certificates
	I1213 11:51:52.595395  604010 provision.go:84] configureAuth start
	I1213 11:51:52.595456  604010 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-796924
	I1213 11:51:52.612730  604010 provision.go:143] copyHostCerts
	I1213 11:51:52.612805  604010 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem, removing ...
	I1213 11:51:52.612815  604010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem
	I1213 11:51:52.612893  604010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem (1082 bytes)
	I1213 11:51:52.612991  604010 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem, removing ...
	I1213 11:51:52.612997  604010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem
	I1213 11:51:52.613022  604010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem (1123 bytes)
	I1213 11:51:52.613072  604010 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem, removing ...
	I1213 11:51:52.613077  604010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem
	I1213 11:51:52.613099  604010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem (1675 bytes)
	I1213 11:51:52.613145  604010 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem org=jenkins.newest-cni-796924 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-796924]
	I1213 11:51:52.732846  604010 provision.go:177] copyRemoteCerts
	I1213 11:51:52.732930  604010 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:51:52.732973  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:52.750653  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:52.855439  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:51:52.874016  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 11:51:52.892129  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 11:51:52.911103  604010 provision.go:87] duration metric: took 315.684656ms to configureAuth
	I1213 11:51:52.911132  604010 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:51:52.911332  604010 config.go:182] Loaded profile config "newest-cni-796924": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:51:52.911340  604010 machine.go:97] duration metric: took 3.875289031s to provisionDockerMachine
	I1213 11:51:52.911347  604010 start.go:293] postStartSetup for "newest-cni-796924" (driver="docker")
	I1213 11:51:52.911359  604010 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:51:52.911407  604010 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:51:52.911460  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:52.929094  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:53.034971  604010 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:51:53.038558  604010 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:51:53.038590  604010 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:51:53.038602  604010 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/addons for local assets ...
	I1213 11:51:53.038659  604010 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/files for local assets ...
	I1213 11:51:53.038763  604010 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> 3089152.pem in /etc/ssl/certs
	I1213 11:51:53.038874  604010 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:51:53.046532  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 11:51:53.064751  604010 start.go:296] duration metric: took 153.388066ms for postStartSetup
	I1213 11:51:53.064850  604010 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:51:53.064897  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:53.083055  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:53.186537  604010 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:51:53.194814  604010 fix.go:56] duration metric: took 4.483190974s for fixHost
	I1213 11:51:53.194902  604010 start.go:83] releasing machines lock for "newest-cni-796924", held for 4.483304896s
	I1213 11:51:53.195014  604010 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-796924
	I1213 11:51:53.218858  604010 ssh_runner.go:195] Run: cat /version.json
	I1213 11:51:53.218911  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:53.219425  604010 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:51:53.219496  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:53.245887  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:53.248082  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:53.440734  604010 ssh_runner.go:195] Run: systemctl --version
	I1213 11:51:53.447618  604010 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:51:53.452306  604010 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:51:53.452441  604010 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:51:53.460789  604010 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 11:51:53.460813  604010 start.go:496] detecting cgroup driver to use...
	I1213 11:51:53.460876  604010 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 11:51:53.460961  604010 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 11:51:53.478830  604010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:51:53.493048  604010 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:51:53.493110  604010 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:51:53.509243  604010 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:51:53.522928  604010 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:51:53.639237  604010 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:51:53.752852  604010 docker.go:234] disabling docker service ...
	I1213 11:51:53.752960  604010 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:51:53.768708  604010 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:51:53.782124  604010 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:51:53.903168  604010 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:51:54.054509  604010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:51:54.067985  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:51:54.083550  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 11:51:54.093447  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 11:51:54.102944  604010 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 11:51:54.103048  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 11:51:54.112424  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:51:54.121802  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 11:51:54.130945  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:51:54.140080  604010 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:51:54.148567  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 11:51:54.157935  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 11:51:54.167456  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 11:51:54.176969  604010 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:51:54.184730  604010 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:51:54.192410  604010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:51:54.297614  604010 ssh_runner.go:195] Run: sudo systemctl restart containerd
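	The sed invocations above rewrite /etc/containerd/config.toml in place before the restart, most importantly forcing SystemdCgroup = false to match the "cgroupfs" driver detected on the host. The same edit expressed as a small Go program (a sketch over an assumed config fragment, not minikube's code):
	
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	func main() {
		// Assumed fragment of a containerd config.toml for illustration.
		conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	            SystemdCgroup = true`
	
		// Mirrors: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
	}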
	I1213 11:51:54.415943  604010 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 11:51:54.416062  604010 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 11:51:54.419918  604010 start.go:564] Will wait 60s for crictl version
	I1213 11:51:54.420004  604010 ssh_runner.go:195] Run: which crictl
	I1213 11:51:54.424003  604010 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:51:54.449039  604010 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 11:51:54.449144  604010 ssh_runner.go:195] Run: containerd --version
	I1213 11:51:54.473383  604010 ssh_runner.go:195] Run: containerd --version
	I1213 11:51:54.499419  604010 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 11:51:54.502369  604010 cli_runner.go:164] Run: docker network inspect newest-cni-796924 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
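	The --format argument above is a Go text/template that the docker CLI evaluates against the inspected object. A standalone example of how such a template renders (the struct and values are made up for illustration; real output comes from docker):
	
	package main
	
	import (
		"os"
		"text/template"
	)
	
	type ipamConfig struct{ Subnet, Gateway string }
	
	type network struct {
		Name   string
		Driver string
		IPAM   struct{ Config []ipamConfig }
	}
	
	func main() {
		// Sample data standing in for `docker network inspect` output.
		n := network{Name: "newest-cni-796924", Driver: "bridge"}
		n.IPAM.Config = []ipamConfig{{Subnet: "192.168.76.0/24", Gateway: "192.168.76.1"}}
	
		// Same template shape as the --format string in the log line above.
		tmpl := template.Must(template.New("net").Parse(
			`{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}"}`))
		if err := tmpl.Execute(os.Stdout, n); err != nil {
			panic(err)
		}
	}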
	I1213 11:51:54.518648  604010 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 11:51:54.522791  604010 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
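	The one-liner above makes the /etc/hosts update idempotent: filter out any existing host.minikube.internal entry, append a fresh one, and copy the temp file back into place. The same logic as a pure Go function (hypothetical helper, shown only to clarify the idiom):
	
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// pinHost returns hosts content with exactly one line mapping ip to name:
	// stale entries are dropped, then the fresh mapping is appended.
	func pinHost(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(hosts, "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		return strings.Join(kept, "\n") + "\n" + ip + "\t" + name + "\n"
	}
	
	func main() {
		fmt.Print(pinHost("127.0.0.1\tlocalhost", "192.168.76.1", "host.minikube.internal"))
	}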
	I1213 11:51:54.535931  604010 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 11:51:54.538956  604010 kubeadm.go:884] updating cluster {Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:51:54.539121  604010 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 11:51:54.539232  604010 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:51:54.563801  604010 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 11:51:54.563827  604010 containerd.go:534] Images already preloaded, skipping extraction
	I1213 11:51:54.563893  604010 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:51:54.592245  604010 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 11:51:54.592267  604010 cache_images.go:86] Images are preloaded, skipping loading
	I1213 11:51:54.592274  604010 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 11:51:54.592392  604010 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-796924 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:51:54.592461  604010 ssh_runner.go:195] Run: sudo crictl info
	I1213 11:51:54.621799  604010 cni.go:84] Creating CNI manager for ""
	I1213 11:51:54.621822  604010 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 11:51:54.621841  604010 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 11:51:54.621863  604010 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-796924 NodeName:newest-cni-796924 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:51:54.621977  604010 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-796924"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 11:51:54.622049  604010 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 11:51:54.629798  604010 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 11:51:54.629892  604010 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:51:54.637447  604010 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 11:51:54.650384  604010 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 11:51:54.666817  604010 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1213 11:51:54.689998  604010 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:51:54.695776  604010 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:51:54.710482  604010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:51:54.832824  604010 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:51:54.850492  604010 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924 for IP: 192.168.76.2
	I1213 11:51:54.850566  604010 certs.go:195] generating shared ca certs ...
	I1213 11:51:54.850597  604010 certs.go:227] acquiring lock for ca certs: {Name:mkca84a85b1b664dba08ef567e84d493256f825e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:51:54.850790  604010 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key
	I1213 11:51:54.850872  604010 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key
	I1213 11:51:54.850895  604010 certs.go:257] generating profile certs ...
	I1213 11:51:54.851026  604010 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/client.key
	I1213 11:51:54.851129  604010 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key.ced45374
	I1213 11:51:54.851211  604010 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.key
	I1213 11:51:54.851379  604010 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem (1338 bytes)
	W1213 11:51:54.851441  604010 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915_empty.pem, impossibly tiny 0 bytes
	I1213 11:51:54.851467  604010 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:51:54.851513  604010 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:51:54.851568  604010 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:51:54.851620  604010 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem (1675 bytes)
	I1213 11:51:54.851698  604010 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 11:51:54.852295  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:51:54.879994  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:51:54.900131  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:51:54.919515  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:51:54.939840  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 11:51:54.959348  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 11:51:54.977529  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:51:54.995648  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 11:51:55.023031  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /usr/share/ca-certificates/3089152.pem (1708 bytes)
	I1213 11:51:55.043814  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:51:55.063273  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem --> /usr/share/ca-certificates/308915.pem (1338 bytes)
	I1213 11:51:55.083198  604010 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:51:55.097732  604010 ssh_runner.go:195] Run: openssl version
	I1213 11:51:55.104458  604010 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3089152.pem
	I1213 11:51:55.112443  604010 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3089152.pem /etc/ssl/certs/3089152.pem
	I1213 11:51:55.120212  604010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3089152.pem
	I1213 11:51:55.124175  604010 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:22 /usr/share/ca-certificates/3089152.pem
	I1213 11:51:55.124296  604010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3089152.pem
	I1213 11:51:55.166612  604010 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:51:55.174931  604010 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:51:55.182763  604010 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:51:55.190655  604010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:51:55.194550  604010 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:51:55.194637  604010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:51:55.235820  604010 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:51:55.243647  604010 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/308915.pem
	I1213 11:51:55.251252  604010 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/308915.pem /etc/ssl/certs/308915.pem
	I1213 11:51:55.258979  604010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/308915.pem
	I1213 11:51:55.263040  604010 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:22 /usr/share/ca-certificates/308915.pem
	I1213 11:51:55.263115  604010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308915.pem
	I1213 11:51:55.305815  604010 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
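	Each CA above is copied under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject hash with a .0 suffix (e.g. b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients locate trusted certificates by hash. A sketch of that install step (hypothetical helper; assumes openssl is on PATH):
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	// installCA computes the OpenSSL subject hash of a CA certificate and
	// links it into certDir as <hash>.0, mirroring the sequence logged above.
	func installCA(pemPath, certDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certDir, hash+".0")
		os.Remove(link) // ln -fs semantics: replace any existing link
		return os.Symlink(pemPath, link)
	}
	
	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Println(err)
		}
	}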
	I1213 11:51:55.313358  604010 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:51:55.317228  604010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 11:51:55.358360  604010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 11:51:55.399354  604010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 11:51:55.440616  604010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 11:51:55.481788  604010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 11:51:55.527783  604010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
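	`openssl x509 -checkend 86400` exits non-zero if the certificate expires within the next 24 hours; the log runs it against each control-plane certificate before restarting the cluster. An equivalent check in pure Go via crypto/x509 (illustrative sketch, not what the test binary runs):
	
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	// expiresWithin reports whether the first certificate in a PEM file
	// expires within d — the crypto/x509 analogue of `openssl x509 -checkend`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}
	
	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(soon, err)
	}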
	I1213 11:51:55.570548  604010 kubeadm.go:401] StartCluster: {Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:51:55.570648  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 11:51:55.570740  604010 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:51:55.597807  604010 cri.go:89] found id: ""
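The empty result just above (found id: "") comes straight from the crictl invocation: "--quiet" prints one container ID per line, filtered to the kube-system namespace label, so no output means no kube-system containers exist yet in this runtime. A hedged sketch of that step in Go (assumed shape, not minikube's cri.go):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // kubeSystemContainerIDs shells out to crictl the same way the log line
    // above does and splits the --quiet output into container IDs.
    func kubeSystemContainerIDs() ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	ids, err := kubeSystemContainerIDs()
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	fmt.Printf("found %d kube-system containers\n", len(ids))
    }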
	I1213 11:51:55.597910  604010 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:51:55.605830  604010 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 11:51:55.605851  604010 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 11:51:55.605907  604010 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 11:51:55.613526  604010 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 11:51:55.614085  604010 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-796924" does not appear in /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 11:51:55.614332  604010 kubeconfig.go:62] /home/jenkins/minikube-integration/22127-307042/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-796924" cluster setting kubeconfig missing "newest-cni-796924" context setting]
	I1213 11:51:55.614935  604010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/kubeconfig: {Name:mk9039d1d506122ff52224469c478e739c5ebabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
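The two kubeconfig lines above show the repair path: the profile "newest-cni-796924" is missing from the kubeconfig, so both its cluster and context entries get rewritten under a file lock. A minimal sketch of that idea with client-go's clientcmd package (hypothetical helper, not minikube's kubeconfig.go; it omits the locking shown above):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/tools/clientcmd"
    	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    // ensureContext adds cluster/user/context entries for name when the
    // kubeconfig does not already contain them.
    func ensureContext(path, name, server string) error {
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		return err
    	}
    	if _, ok := cfg.Contexts[name]; ok {
    		return nil // nothing to repair
    	}
    	cfg.Clusters[name] = &clientcmdapi.Cluster{Server: server}
    	cfg.AuthInfos[name] = clientcmdapi.NewAuthInfo()
    	cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
    	return clientcmd.WriteToFile(*cfg, path)
    }

    func main() {
    	err := ensureContext("/home/jenkins/minikube-integration/22127-307042/kubeconfig",
    		"newest-cni-796924", "https://192.168.76.2:8443")
    	fmt.Println("repair result:", err)
    }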
	I1213 11:51:55.617326  604010 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 11:51:55.625376  604010 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1213 11:51:55.625455  604010 kubeadm.go:602] duration metric: took 19.59756ms to restartPrimaryControlPlane
	I1213 11:51:55.625473  604010 kubeadm.go:403] duration metric: took 54.935084ms to StartCluster
	I1213 11:51:55.625491  604010 settings.go:142] acquiring lock: {Name:mk079e9a25ebbc2c8fbae42d4c6ed096a652c00b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:51:55.625565  604010 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 11:51:55.626520  604010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/kubeconfig: {Name:mk9039d1d506122ff52224469c478e739c5ebabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:51:55.626793  604010 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 11:51:55.627185  604010 config.go:182] Loaded profile config "newest-cni-796924": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:51:55.627271  604010 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 11:51:55.627363  604010 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-796924"
	I1213 11:51:55.627383  604010 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-796924"
	I1213 11:51:55.627413  604010 host.go:66] Checking if "newest-cni-796924" exists ...
	I1213 11:51:55.627434  604010 addons.go:70] Setting dashboard=true in profile "newest-cni-796924"
	I1213 11:51:55.627450  604010 addons.go:239] Setting addon dashboard=true in "newest-cni-796924"
	W1213 11:51:55.627456  604010 addons.go:248] addon dashboard should already be in state true
	I1213 11:51:55.627477  604010 host.go:66] Checking if "newest-cni-796924" exists ...
	I1213 11:51:55.627878  604010 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:51:55.628091  604010 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:51:55.628783  604010 addons.go:70] Setting default-storageclass=true in profile "newest-cni-796924"
	I1213 11:51:55.628812  604010 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-796924"
	I1213 11:51:55.629112  604010 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:51:55.631079  604010 out.go:179] * Verifying Kubernetes components...
	I1213 11:51:55.634139  604010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:51:55.667375  604010 addons.go:239] Setting addon default-storageclass=true in "newest-cni-796924"
	I1213 11:51:55.667423  604010 host.go:66] Checking if "newest-cni-796924" exists ...
	I1213 11:51:55.667842  604010 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:51:55.688084  604010 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:51:55.691677  604010 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:51:55.691701  604010 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 11:51:55.691785  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:55.697906  604010 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 11:51:55.697933  604010 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 11:51:55.698005  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:55.704903  604010 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 11:51:55.707765  604010 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1213 11:51:53.170873  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:55.171466  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:57.171707  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:51:55.710658  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 11:51:55.710701  604010 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 11:51:55.710771  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:55.754330  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:55.772597  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:55.773144  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:55.866635  604010 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:51:55.926205  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 11:51:55.934055  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:51:55.957399  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 11:51:55.957444  604010 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 11:51:55.971225  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 11:51:55.971291  604010 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 11:51:56.007402  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 11:51:56.007444  604010 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 11:51:56.023097  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 11:51:56.023122  604010 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 11:51:56.039306  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 11:51:56.039347  604010 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 11:51:56.054865  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 11:51:56.054892  604010 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 11:51:56.069056  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 11:51:56.069097  604010 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 11:51:56.083856  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 11:51:56.083885  604010 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 11:51:56.097577  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 11:51:56.097600  604010 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 11:51:56.111351  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
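The apply above bundles all ten dashboard manifests into a single kubectl invocation with repeated -f flags, run against the in-cluster kubeconfig. A sketch of composing such a command in Go (assumed helper, not minikube's addons.go):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // kubectlApply builds one "kubectl apply" command with a -f flag per
    // manifest, pointing KUBECONFIG at the given file.
    func kubectlApply(kubectl, kubeconfig string, manifests []string) *exec.Cmd {
    	args := []string{"apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	cmd := exec.Command(kubectl, args...)
    	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
    	return cmd
    }

    func main() {
    	cmd := kubectlApply("/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
    		"/var/lib/minikube/kubeconfig",
    		[]string{"/etc/kubernetes/addons/dashboard-ns.yaml",
    			"/etc/kubernetes/addons/dashboard-svc.yaml"})
    	fmt.Println(cmd.String())
    }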
	I1213 11:51:56.663977  604010 api_server.go:52] waiting for apiserver process to appear ...
	W1213 11:51:56.664058  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.664121  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:51:56.664172  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.664188  604010 retry.go:31] will retry after 289.236479ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.664122  604010 retry.go:31] will retry after 183.877549ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
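Note the retry delays logged above (183.877549ms, 289.236479ms): they are not a fixed interval, which is consistent with randomized backoff, so the concurrent appliers (storageclass, storage-provisioner, dashboard) don't hit the apiserver in lockstep. A minimal illustration of that pattern (an illustration only, not minikube's retry.go):

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithJitter runs op up to attempts times, sleeping a random
    // duration in [base, 2*base) between failures.
    func retryWithJitter(attempts int, base time.Duration, op func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = op(); err == nil {
    			return nil
    		}
    		d := base + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: %v\n", d, err)
    		time.Sleep(d)
    	}
    	return err
    }

    func main() {
    	i := 0
    	_ = retryWithJitter(5, 200*time.Millisecond, func() error {
    		i++
    		if i < 3 {
    			return fmt.Errorf("connect: connection refused")
    		}
    		return nil
    	})
    }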
	W1213 11:51:56.664453  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.664469  604010 retry.go:31] will retry after 218.899341ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.849187  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 11:51:56.883801  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:51:56.926668  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.926802  604010 retry.go:31] will retry after 241.089101ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.953849  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:51:56.985603  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.985688  604010 retry.go:31] will retry after 237.809149ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:51:57.026263  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:57.026297  604010 retry.go:31] will retry after 349.427803ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:57.164593  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:51:57.169067  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 11:51:57.224678  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:51:57.234523  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:57.234624  604010 retry.go:31] will retry after 787.051236ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:51:57.297371  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:57.297440  604010 retry.go:31] will retry after 317.469921ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:57.376456  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:51:57.452615  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:57.452649  604010 retry.go:31] will retry after 679.978714ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:57.616149  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 11:51:57.664727  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:51:57.701776  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:57.701820  604010 retry.go:31] will retry after 682.458958ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.022897  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:51:58.088105  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.088141  604010 retry.go:31] will retry after 475.463602ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.133516  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:51:58.165032  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:51:58.230626  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.230659  604010 retry.go:31] will retry after 634.421741ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.385149  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:51:58.461368  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.461471  604010 retry.go:31] will retry after 859.118132ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:51:59.671078  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:02.171305  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
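The two 596998 lines above are interleaved from the parallel no-preload-333352 test (the surrounding 604010 lines resume below with earlier timestamps); that test is asking the equivalent question at the node level, polling the node's Ready condition against an apiserver that is likewise refusing connections. A minimal client-go sketch of the check node_ready.go is performing, assuming the kubeconfig path seen elsewhere in this log:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady fetches the node and reports whether its Ready
    // condition is currently True.
    func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err // "connection refused" while the apiserver is down
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        fmt.Println(nodeReady(context.Background(), cs, "no-preload-333352"))
    }
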
	I1213 11:51:58.564227  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:51:58.633858  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.633891  604010 retry.go:31] will retry after 1.632863719s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.665061  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:51:58.866071  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:51:58.936827  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.936859  604010 retry.go:31] will retry after 1.533813591s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:59.165263  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:51:59.321822  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:51:59.385607  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:59.385640  604010 retry.go:31] will retry after 2.101781304s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:59.665231  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:00.164312  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:00.267962  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 11:52:00.471799  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:52:00.516223  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:00.516306  604010 retry.go:31] will retry after 1.542990826s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:52:00.569718  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:00.569762  604010 retry.go:31] will retry after 1.699392085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:00.664868  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:01.165071  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:01.487701  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:52:01.556576  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:01.556610  604010 retry.go:31] will retry after 1.79578881s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:01.665032  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:02.059588  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:52:02.123368  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:02.123421  604010 retry.go:31] will retry after 4.212258745s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:02.164643  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:02.270065  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:52:02.336655  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:02.336687  604010 retry.go:31] will retry after 2.291652574s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:02.665180  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:03.164491  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:03.353076  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:52:03.415819  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:03.415855  604010 retry.go:31] will retry after 3.520621119s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:52:04.171660  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:06.671628  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:03.664666  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:04.164990  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:04.629361  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:52:04.665164  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:04.695856  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:04.695887  604010 retry.go:31] will retry after 5.092647079s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:05.164583  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:05.665005  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:06.164298  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
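Interleaved with the applies, a second loop probes for the apiserver process itself at a steady ~500ms cadence (the .16x/.66x timestamps): pgrep's -x requires the pattern to match the whole string, -n selects the newest match, and -f matches against the full command line, so the probe succeeds only once a kube-apiserver process mentioning minikube exists. A local-exec sketch of the same probe (the real one runs through ssh_runner inside the node container):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverProcessUp mirrors the probe in the log: pgrep exits 0 only
    // when at least one process matches, so a nil error means it is up.
    func apiserverProcessUp() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        for !apiserverProcessUp() {
            time.Sleep(500 * time.Millisecond) // matches the cadence in the log
        }
        fmt.Println("kube-apiserver process is running")
    }
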
	I1213 11:52:06.336728  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:52:06.399256  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:06.399289  604010 retry.go:31] will retry after 2.548236052s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:06.664733  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:06.937128  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:52:07.007320  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:07.007359  604010 retry.go:31] will retry after 3.279734506s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:07.164482  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:07.664186  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:08.164259  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:09.170863  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:11.170983  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:08.664905  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:08.947682  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:52:09.039225  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:09.039255  604010 retry.go:31] will retry after 6.163469341s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:09.164651  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:09.664239  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:09.789499  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:52:09.850576  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:09.850610  604010 retry.go:31] will retry after 3.796434626s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:10.165090  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:10.288047  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:52:10.355227  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:10.355265  604010 retry.go:31] will retry after 7.010948619s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:10.664471  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:11.165062  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:11.664272  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:12.164932  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:12.664657  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:13.164305  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:13.670824  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:15.671074  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:13.647328  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:52:13.664818  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:13.719910  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:13.719942  604010 retry.go:31] will retry after 9.330768854s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:14.164344  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:14.664306  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:15.164242  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:15.203030  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:52:15.263577  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:15.263607  604010 retry.go:31] will retry after 8.190073233s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:15.664266  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:16.165207  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:16.664293  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:17.164467  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:17.367027  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:52:17.430899  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:17.430934  604010 retry.go:31] will retry after 13.887712507s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:17.664357  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:18.164881  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:18.170945  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:20.670832  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:18.664960  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:19.164308  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:19.665208  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:20.165105  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:20.664287  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:21.164362  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:21.664274  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:22.164288  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:22.665206  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
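Interleaved with the retries, process 604010 checks twice a second for a running apiserver with sudo pgrep -xnf kube-apiserver.*minikube.* (-x match the whole string, -n newest matching process, -f match against the full command line). The poll never succeeds, which is why every apply keeps failing. A sketch of that polling loop in Go; the 500ms cadence is read off the timestamps above, and the overall deadline is illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	ticker := time.NewTicker(500 * time.Millisecond)
    	defer ticker.Stop()
    	deadline := time.After(6 * time.Minute)
    	for {
    		select {
    		case <-ticker.C:
    			// pgrep exits 0 only when a matching process exists.
    			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    				fmt.Println("kube-apiserver is running")
    				return
    			}
    		case <-deadline:
    			fmt.Println("timed out waiting for kube-apiserver")
    			return
    		}
    	}
    }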
	I1213 11:52:23.051577  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:52:23.111902  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:23.111935  604010 retry.go:31] will retry after 11.527342508s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:23.165176  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:23.453917  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:52:23.170872  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:25.171346  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:27.171433  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:23.521291  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:23.521324  604010 retry.go:31] will retry after 14.842315117s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:23.664722  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:24.165113  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:24.664242  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:25.164277  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:25.664353  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:26.164245  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:26.664280  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:27.164344  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:27.664260  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:28.164294  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:29.670795  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:31.671822  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:28.664213  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:29.165160  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:29.664269  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:30.165128  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:30.664169  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:31.164314  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:31.319227  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:52:31.384220  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:31.384257  604010 retry.go:31] will retry after 14.168397615s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:31.664303  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:32.164990  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:32.664299  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:33.164301  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:34.171181  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:36.670803  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:33.664641  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:34.164270  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:34.639887  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:52:34.664451  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:34.713642  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:34.713678  604010 retry.go:31] will retry after 21.545330114s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:35.164160  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:35.665036  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:36.164253  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:36.664233  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:37.164426  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:37.664423  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:38.164585  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:38.364338  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:52:38.426452  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:38.426486  604010 retry.go:31] will retry after 16.958085374s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:52:38.670951  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:41.170820  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:38.665187  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:39.164590  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:39.665128  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:40.164295  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:40.664289  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:41.164238  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:41.664308  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:42.164562  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:42.664974  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:43.164327  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:43.170883  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:45.172031  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:47.670782  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:43.664236  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:44.164970  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:44.664271  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:45.164423  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:45.553023  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:52:45.614931  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:45.614965  604010 retry.go:31] will retry after 19.954026213s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:45.665141  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:46.164288  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:46.664717  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:47.164232  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:47.664844  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:48.164283  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:50.171769  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:52.671828  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:48.665063  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:49.164283  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:49.664430  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:50.165168  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:50.665085  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:51.164301  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:51.664309  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:52.165148  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:52.664704  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:53.164339  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:55.170984  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:56.171498  596998 node_ready.go:38] duration metric: took 6m0.001140759s for node "no-preload-333352" to be "Ready" ...
	I1213 11:52:56.174587  596998 out.go:203] 
	W1213 11:52:56.177556  596998 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 11:52:56.177585  596998 out.go:285] * 
	W1213 11:52:56.179740  596998 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 11:52:56.182759  596998 out.go:203] 
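Here the no-preload profile (process 596998) gives up: after 6m0s of polling https://192.168.85.2:8443 the node's "Ready" condition was never observable, because the apiserver never answered, and minikube exits with GUEST_START. The check it was repeating amounts to fetching the Node object and inspecting its Ready condition. A compilable sketch with client-go; the kubeconfig path and node name are taken from the log, while the helper name nodeReady is mine:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the named node has condition Ready=True.
    func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err // e.g. "connect: connection refused", as above
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(nodeReady(cs, "no-preload-333352"))
    }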
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.850948040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.850964713Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.851002933Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.851021788Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.851032094Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.851043467Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.851052681Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.851068796Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.851086577Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.851121301Z" level=info msg="Connect containerd service"
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.851401698Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.851964747Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.867726494Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.868237695Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.868561226Z" level=info msg="Start subscribing containerd event"
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.868632505Z" level=info msg="Start recovering state"
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.889278015Z" level=info msg="Start event monitor"
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.889343254Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.889355102Z" level=info msg="Start streaming server"
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.889372054Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.889392994Z" level=info msg="runtime interface starting up..."
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.889400510Z" level=info msg="starting plugins..."
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.889437261Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 11:46:53 no-preload-333352 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.891303551Z" level=info msg="containerd successfully booted in 0.061815s"
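	
	The "failed to load cni during init" line above is expected noise at this point in the boot: containerd's CRI plugin found nothing under /etc/cni/net.d, and the "Start cni network conf syncer for default" line shows it will pick a config up as soon as one is written (minikube's kindnet deployment normally does this). A minimal sketch of a conflist that would satisfy the loader; the file name, network name, and 10.42.0.0/16 subnet are illustrative, not taken from this run:
	
	sudo mkdir -p /etc/cni/net.d
	sudo tee /etc/cni/net.d/10-bridge.conflist >/dev/null <<-'EOF'   # <<- strips the leading tabs used in this report
	{
	  "cniVersion": "1.0.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "cni0",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.42.0.0/16" }
	    }
	  ]
	}
	EOF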
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:52:57.375775    3935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:57.376589    3935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:57.378176    3935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:57.378477    3935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:57.379963    3935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
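	
	Every "connection refused" above means nothing is listening on the apiserver port at all, so kubectl never gets as far as authentication or API discovery. A quick confirmation from the host, assuming the kicbase image's usual curl and ss binaries (a diagnostic sketch, not part of the test):
	
	out/minikube-linux-arm64 -p no-preload-333352 ssh -- curl -k --max-time 5 https://localhost:8443/healthz
	out/minikube-linux-arm64 -p no-preload-333352 ssh -- sudo ss -tlnp 'sport = :8443'   # no output = no listener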
	
	
	==> dmesg <==
	[ +12.907693] overlayfs: idmapped layers are currently not supported
	[Dec13 09:25] overlayfs: idmapped layers are currently not supported
	[ +26.192425] overlayfs: idmapped layers are currently not supported
	[Dec13 09:26] overlayfs: idmapped layers are currently not supported
	[ +25.729788] overlayfs: idmapped layers are currently not supported
	[Dec13 09:27] overlayfs: idmapped layers are currently not supported
	[Dec13 09:28] overlayfs: idmapped layers are currently not supported
	[Dec13 09:31] overlayfs: idmapped layers are currently not supported
	[Dec13 09:32] overlayfs: idmapped layers are currently not supported
	[ +17.979093] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec13 09:43] overlayfs: idmapped layers are currently not supported
	[Dec13 09:45] overlayfs: idmapped layers are currently not supported
	[ +25.885102] overlayfs: idmapped layers are currently not supported
	[Dec13 09:46] overlayfs: idmapped layers are currently not supported
	[ +22.078149] overlayfs: idmapped layers are currently not supported
	[Dec13 09:47] overlayfs: idmapped layers are currently not supported
	[Dec13 09:48] overlayfs: idmapped layers are currently not supported
	[Dec13 09:49] overlayfs: idmapped layers are currently not supported
	[Dec13 09:51] overlayfs: idmapped layers are currently not supported
	[ +17.043564] overlayfs: idmapped layers are currently not supported
	[Dec13 09:52] overlayfs: idmapped layers are currently not supported
	[Dec13 09:53] overlayfs: idmapped layers are currently not supported
	[Dec13 10:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:19] hrtimer: interrupt took 21247146 ns
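	
	None of the dmesg lines above correlate with this failure: "idmapped layers are currently not supported" is the 5.15 kernel declining an overlayfs feature and falling back (each container start can log one), and the kauditd/hrtimer lines are routine kernel reports. A hedged way to confirm they are just accumulated noise:
	
	uname -r                                  # 5.15.0-1084-aws in this run
	sudo dmesg | grep -c 'idmapped layers'    # counts the harmless fallback warnings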
	
	
	==> kernel <==
	 11:52:57 up  4:35,  0 user,  load average: 0.65, 0.80, 1.37
	Linux no-preload-333352 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 11:52:53 no-preload-333352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:52:54 no-preload-333352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 480.
	Dec 13 11:52:54 no-preload-333352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:52:54 no-preload-333352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:52:54 no-preload-333352 kubelet[3811]: E1213 11:52:54.725840    3811 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:52:54 no-preload-333352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:52:54 no-preload-333352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:52:55 no-preload-333352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 481.
	Dec 13 11:52:55 no-preload-333352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:52:55 no-preload-333352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:52:55 no-preload-333352 kubelet[3816]: E1213 11:52:55.472972    3816 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:52:55 no-preload-333352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:52:55 no-preload-333352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:52:56 no-preload-333352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 13 11:52:56 no-preload-333352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:52:56 no-preload-333352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:52:56 no-preload-333352 kubelet[3822]: E1213 11:52:56.319418    3822 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:52:56 no-preload-333352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:52:56 no-preload-333352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:52:57 no-preload-333352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 13 11:52:57 no-preload-333352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:52:57 no-preload-333352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:52:57 no-preload-333352 kubelet[3904]: E1213 11:52:57.240187    3904 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:52:57 no-preload-333352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:52:57 no-preload-333352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
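	
	The restart counter reaching 483 is systemd relaunching a kubelet that exits during config validation every time: the v1.35.0-beta.0 kubelet refuses to run on a cgroup v1 host. The usual check for which hierarchy the host exposes (the filesystem type on the cgroup mount is the tell):
	
	stat -fc %T /sys/fs/cgroup/   # "cgroup2fs" = cgroup v2; "tmpfs" = the legacy v1 layout this kubelet rejects
	
	On an Ubuntu 20.04 host like this one, booting with systemd.unified_cgroup_hierarchy=1 on the kernel command line is the standard way to switch to v2; whether this CI image can reboot into that is outside the scope of this log.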
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-333352 -n no-preload-333352
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-333352 -n no-preload-333352: exit status 2 (331.716107ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-333352" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (369.99s)
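
The helpers above pull single fields out of minikube's status struct with Go templates; the same mechanism takes several fields at once, which is handy when reproducing this post-mortem by hand (a sketch using this run's binary and profile):

	out/minikube-linux-arm64 status -p no-preload-333352 --format='{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'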

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (104.06s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-796924 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1213 11:50:08.209113  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/old-k8s-version-624185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:50:12.240581  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:50:12.404104  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/default-k8s-diff-port-191845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:50:40.104337  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/default-k8s-diff-port-191845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-796924 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m42.352551116s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-796924 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
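
The stderr above even suggests --validate=false, but that flag only skips the OpenAPI download; with the apiserver refusing connections on :8443 the apply fails either way. Re-running the addon callback's own command (taken verbatim from the error, one manifest shown) makes that easy to confirm:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
	  -f /etc/kubernetes/addons/metrics-apiservice.yaml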
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-796924
helpers_test.go:244: (dbg) docker inspect newest-cni-796924:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "27aba94e8ede7488df815559f44c17888a5d8cf06635a1b1d81eb33d3d933273",
	        "Created": "2025-12-13T11:41:45.560617227Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 589565,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T11:41:45.628321439Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/27aba94e8ede7488df815559f44c17888a5d8cf06635a1b1d81eb33d3d933273/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/27aba94e8ede7488df815559f44c17888a5d8cf06635a1b1d81eb33d3d933273/hostname",
	        "HostsPath": "/var/lib/docker/containers/27aba94e8ede7488df815559f44c17888a5d8cf06635a1b1d81eb33d3d933273/hosts",
	        "LogPath": "/var/lib/docker/containers/27aba94e8ede7488df815559f44c17888a5d8cf06635a1b1d81eb33d3d933273/27aba94e8ede7488df815559f44c17888a5d8cf06635a1b1d81eb33d3d933273-json.log",
	        "Name": "/newest-cni-796924",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-796924:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-796924",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "27aba94e8ede7488df815559f44c17888a5d8cf06635a1b1d81eb33d3d933273",
	                "LowerDir": "/var/lib/docker/overlay2/e3273dd39227f26c709bd7f69a32b4360acb8650246414f8098119d5f28e0f70-init/diff:/var/lib/docker/overlay2/5d0ca86320f2c06765995d644a73af30cd6ddb36f5c1a2a6b1ebb24af53cdabd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e3273dd39227f26c709bd7f69a32b4360acb8650246414f8098119d5f28e0f70/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e3273dd39227f26c709bd7f69a32b4360acb8650246414f8098119d5f28e0f70/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e3273dd39227f26c709bd7f69a32b4360acb8650246414f8098119d5f28e0f70/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-796924",
	                "Source": "/var/lib/docker/volumes/newest-cni-796924/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-796924",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-796924",
	                "name.minikube.sigs.k8s.io": "newest-cni-796924",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "92d11ff764680cdd62555d8da891c50ecfe321b3d8620a2e9bb3f0c5bfca4c60",
	            "SandboxKey": "/var/run/docker/netns/92d11ff76468",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-796924": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:c8:11:0f:14:22",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "524b54a7afb58fdfadc2532a94da198ca12aafc23248ec4905999b39dfe064e0",
	                    "EndpointID": "99474f614f6ae76108238f2f77b9e4272618bc5ea1a8c7ccb8cffa8255291355",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-796924",
	                        "27aba94e8ede"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
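
Two cross-checks fall straight out of the inspect dump: "Memory": 3221225472 is 3 * 1024^3 bytes, i.e. the --memory=3072 from the audit table below, and the empty "HostPort" values under PortBindings mean Docker chose ephemeral host ports, which appear resolved under NetworkSettings.Ports. Reading the apiserver's mapped port back with an inspect template (a sketch; the profile name is from this run):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-796924   # 33433 in the state captured above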
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-796924 -n newest-cni-796924
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-796924 -n newest-cni-796924: exit status 6 (374.353624ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 11:51:45.588299  603476 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-796924" does not appear in /home/jenkins/minikube-integration/22127-307042/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
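
The "stale minikube-vm" warning and the status.go error agree on the cause: the kubeconfig has no entry for this profile. The fix the tool itself suggests, plus a confirmation step (a sketch using this run's binary and profile):

	out/minikube-linux-arm64 update-context -p newest-cni-796924
	kubectl config current-context   # should now print newest-cni-796924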
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-796924 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p embed-certs-951675 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:38 UTC │ 13 Dec 25 11:38 UTC │
	│ addons  │ enable dashboard -p embed-certs-951675 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:38 UTC │ 13 Dec 25 11:38 UTC │
	│ start   │ -p embed-certs-951675 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:38 UTC │ 13 Dec 25 11:39 UTC │
	│ image   │ embed-certs-951675 image list --format=json                                                                                                                                                                                                                │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ pause   │ -p embed-certs-951675 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ unpause │ -p embed-certs-951675 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ delete  │ -p embed-certs-951675                                                                                                                                                                                                                                      │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ delete  │ -p embed-certs-951675                                                                                                                                                                                                                                      │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ delete  │ -p disable-driver-mounts-823668                                                                                                                                                                                                                            │ disable-driver-mounts-823668 │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ start   │ -p default-k8s-diff-port-191845 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:40 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-191845 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:40 UTC │
	│ stop    │ -p default-k8s-diff-port-191845 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:40 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-191845 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:40 UTC │
	│ start   │ -p default-k8s-diff-port-191845 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:41 UTC │
	│ image   │ default-k8s-diff-port-191845 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ pause   │ -p default-k8s-diff-port-191845 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ unpause │ -p default-k8s-diff-port-191845 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ delete  │ -p default-k8s-diff-port-191845                                                                                                                                                                                                                            │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ delete  │ -p default-k8s-diff-port-191845                                                                                                                                                                                                                            │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ start   │ -p newest-cni-796924 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-796924            │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-333352 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-333352            │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ stop    │ -p no-preload-333352 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-333352            │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │ 13 Dec 25 11:46 UTC │
	│ addons  │ enable dashboard -p no-preload-333352 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-333352            │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │ 13 Dec 25 11:46 UTC │
	│ start   │ -p no-preload-333352 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-333352            │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-796924 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-796924            │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 11:46:47
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 11:46:47.931970  596998 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:46:47.932200  596998 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:46:47.932224  596998 out.go:374] Setting ErrFile to fd 2...
	I1213 11:46:47.932243  596998 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:46:47.932512  596998 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 11:46:47.932893  596998 out.go:368] Setting JSON to false
	I1213 11:46:47.933847  596998 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":16161,"bootTime":1765610247,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 11:46:47.933941  596998 start.go:143] virtualization:  
	I1213 11:46:47.936853  596998 out.go:179] * [no-preload-333352] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:46:47.940791  596998 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:46:47.940972  596998 notify.go:221] Checking for updates...
	I1213 11:46:47.944715  596998 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:46:47.948724  596998 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 11:46:47.952032  596998 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 11:46:47.955654  596998 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:46:47.958860  596998 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:46:47.962467  596998 config.go:182] Loaded profile config "no-preload-333352": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:46:47.963200  596998 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:46:47.998152  596998 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:46:47.998290  596998 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:46:48.062434  596998 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:46:48.052493365 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:46:48.062546  596998 docker.go:319] overlay module found
	I1213 11:46:48.065709  596998 out.go:179] * Using the docker driver based on existing profile
	I1213 11:46:48.068574  596998 start.go:309] selected driver: docker
	I1213 11:46:48.068598  596998 start.go:927] validating driver "docker" against &{Name:no-preload-333352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-333352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:46:48.068700  596998 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:46:48.069441  596998 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:46:48.125553  596998 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:46:48.115368398 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:46:48.125930  596998 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 11:46:48.125957  596998 cni.go:84] Creating CNI manager for ""
	I1213 11:46:48.126004  596998 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
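	
	kindnet here is only minikube's recommendation for the docker driver + containerd pairing; the choice can be pinned explicitly at start time with the --cni flag (the value shown is illustrative):
	
	out/minikube-linux-arm64 start -p no-preload-333352 --cni=kindnet
	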
	I1213 11:46:48.126038  596998 start.go:353] cluster config:
	{Name:no-preload-333352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-333352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:46:48.129462  596998 out.go:179] * Starting "no-preload-333352" primary control-plane node in "no-preload-333352" cluster
	I1213 11:46:48.132280  596998 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 11:46:48.135249  596998 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:46:48.138117  596998 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 11:46:48.138171  596998 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:46:48.138307  596998 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/config.json ...
	I1213 11:46:48.138613  596998 cache.go:107] acquiring lock: {Name:mk31a59cdc41332147a99da115e762325d4c0338 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:46:48.138751  596998 cache.go:115] /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1213 11:46:48.138763  596998 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 161.618µs
	I1213 11:46:48.138777  596998 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1213 11:46:48.138765  596998 cache.go:107] acquiring lock: {Name:mk2ae32cc20ed4877d34af62f362936effddd88e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:46:48.138790  596998 cache.go:107] acquiring lock: {Name:mkc81502ef492ecd96689a43cd1ba75bb4269f1e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:46:48.138812  596998 cache.go:107] acquiring lock: {Name:mk8c5f5248a840d1f1002cf2ef82275f7d10aa22 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:46:48.138842  596998 cache.go:115] /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1213 11:46:48.138848  596998 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 60.267µs
	I1213 11:46:48.138854  596998 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1213 11:46:48.138851  596998 cache.go:107] acquiring lock: {Name:mk35ccdf3fe56b66e694c71ff2d919f143d8dacc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:46:48.138862  596998 cache.go:107] acquiring lock: {Name:mk23fe723c287cca56429f89071149f1d96bb4dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:46:48.138892  596998 cache.go:115] /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1213 11:46:48.138901  596998 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 51.398µs
	I1213 11:46:48.138905  596998 cache.go:115] /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1213 11:46:48.138908  596998 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1213 11:46:48.138894  596998 cache.go:115] /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1213 11:46:48.138912  596998 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 101.006µs
	I1213 11:46:48.138918  596998 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1213 11:46:48.138918  596998 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 56.075µs
	I1213 11:46:48.138924  596998 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1213 11:46:48.138940  596998 cache.go:115] /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1213 11:46:48.138947  596998 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 191.059µs
	I1213 11:46:48.138952  596998 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1213 11:46:48.138934  596998 cache.go:107] acquiring lock: {Name:mkc6bf22ce18468a92a774694a4b49cbc277f1ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:46:48.138948  596998 cache.go:107] acquiring lock: {Name:mk26d49691f1ca365a0728b2ae008656f80369ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:46:48.138975  596998 cache.go:115] /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1213 11:46:48.138980  596998 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 47.803µs
	I1213 11:46:48.138985  596998 cache.go:115] /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1213 11:46:48.138986  596998 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1213 11:46:48.138992  596998 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 44.924µs
	I1213 11:46:48.138999  596998 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-307042/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1213 11:46:48.139013  596998 cache.go:87] Successfully saved all images to host disk.
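The cache.go sequence above repeats one pattern per image: take a per-image lock, stat the tar under cache/images/arm64, and skip the download when it already exists. A minimal Go sketch of that check-then-save flow (ensureCached and the save callback are illustrative stand-ins, not minikube's actual API):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
        "sync"
        "time"
    )

    // cachePath maps "registry.k8s.io/pause:3.10.1" to
    // <root>/registry.k8s.io/pause_3.10.1, like the paths in the log.
    func cachePath(root, image string) string {
        return filepath.Join(root, strings.ReplaceAll(image, ":", "_"))
    }

    func ensureCached(root, image string, mu *sync.Mutex, save func(string) error) error {
        mu.Lock() // per-image lock, like the "acquiring lock" lines
        defer mu.Unlock()
        start := time.Now()
        p := cachePath(root, image)
        if _, err := os.Stat(p); err == nil {
            // tar already on disk: nothing to download, just log the timing
            fmt.Printf("cache image %q -> %q took %s (exists, skipping)\n", image, p, time.Since(start))
            return nil
        }
        if err := os.MkdirAll(filepath.Dir(p), 0o755); err != nil {
            return err
        }
        return save(p) // cache miss: pull and write the tar
    }

    func main() {
        var mu sync.Mutex
        err := ensureCached("/tmp/cache/images/arm64", "registry.k8s.io/pause:3.10.1", &mu,
            func(p string) error { return os.WriteFile(p, nil, 0o644) })
        fmt.Println(err)
    }

The microsecond "took" figures in the log are exactly this fast path: only the stat runs, so no image is pulled.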
	I1213 11:46:48.157619  596998 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:46:48.157642  596998 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:46:48.157658  596998 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:46:48.157688  596998 start.go:360] acquireMachinesLock for no-preload-333352: {Name:mkcf6f110441e125d79b38a8f8cc1a9606a821b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:46:48.157750  596998 start.go:364] duration metric: took 36.333µs to acquireMachinesLock for "no-preload-333352"
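acquireMachinesLock logs its lock spec as {Delay:500ms Timeout:10m0s}: retry every Delay until Timeout expires. A self-contained sketch of that acquire-with-retry shape, using an O_EXCL lock file purely for illustration (minikube's real lock implementation differs):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // acquire retries every delay until timeout; O_EXCL makes creation
    // fail while another process still holds the lock file.
    func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out acquiring %s: %w", path, err)
            }
            time.Sleep(delay) // Delay:500ms in the logged spec
        }
    }

    func main() {
        start := time.Now()
        release, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
        if err != nil {
            panic(err)
        }
        defer release()
        fmt.Printf("duration metric: took %s to acquire lock\n", time.Since(start))
    }

With no contention the first OpenFile succeeds immediately, which is why the log reports the lock held after only 36µs.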
	I1213 11:46:48.157773  596998 start.go:96] Skipping create...Using existing machine configuration
	I1213 11:46:48.157778  596998 fix.go:54] fixHost starting: 
	I1213 11:46:48.158031  596998 cli_runner.go:164] Run: docker container inspect no-preload-333352 --format={{.State.Status}}
	I1213 11:46:48.175033  596998 fix.go:112] recreateIfNeeded on no-preload-333352: state=Stopped err=<nil>
	W1213 11:46:48.175073  596998 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 11:46:48.180317  596998 out.go:252] * Restarting existing docker container for "no-preload-333352" ...
	I1213 11:46:48.180439  596998 cli_runner.go:164] Run: docker start no-preload-333352
	I1213 11:46:48.429680  596998 cli_runner.go:164] Run: docker container inspect no-preload-333352 --format={{.State.Status}}
	I1213 11:46:48.453033  596998 kic.go:430] container "no-preload-333352" state is running.
	I1213 11:46:48.453454  596998 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-333352
	I1213 11:46:48.479808  596998 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/config.json ...
	I1213 11:46:48.480042  596998 machine.go:94] provisionDockerMachine start ...
	I1213 11:46:48.480102  596998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:46:48.503420  596998 main.go:143] libmachine: Using SSH client type: native
	I1213 11:46:48.503750  596998 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1213 11:46:48.503759  596998 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:46:48.504579  596998 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 11:46:51.658471  596998 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-333352
	
	I1213 11:46:51.658499  596998 ubuntu.go:182] provisioning hostname "no-preload-333352"
	I1213 11:46:51.658568  596998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:46:51.680359  596998 main.go:143] libmachine: Using SSH client type: native
	I1213 11:46:51.680665  596998 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1213 11:46:51.680681  596998 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-333352 && echo "no-preload-333352" | sudo tee /etc/hostname
	I1213 11:46:51.840345  596998 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-333352
	
	I1213 11:46:51.840432  596998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:46:51.858862  596998 main.go:143] libmachine: Using SSH client type: native
	I1213 11:46:51.859190  596998 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1213 11:46:51.859212  596998 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-333352' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-333352/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-333352' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:46:52.011439  596998 main.go:143] libmachine: SSH cmd err, output: <nil>: 
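The provisioning steps above dial sshd on the forwarded port 33435 with the profile's id_rsa key; the first attempt at 11:46:48 fails with "handshake failed: EOF" because the restarted container's sshd is not up yet, and the retry at 11:46:51 succeeds. A hedged sketch of the same run-one-command-over-SSH flow with golang.org/x/crypto/ssh (the retry count and timeout here are arbitrary choices, not minikube's):

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // runSSH dials the forwarded docker port and runs one command,
    // retrying while the container's sshd is still coming up.
    func runSSH(addr, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test container
            Timeout:         10 * time.Second,
        }
        var client *ssh.Client
        for i := 0; i < 10; i++ { // ride out the "handshake failed: EOF" window
            if client, err = ssh.Dial("tcp", addr, cfg); err == nil {
                break
            }
            time.Sleep(time.Second)
        }
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        out, err := runSSH("127.0.0.1:33435",
            "/home/jenkins/minikube-integration/22127-307042/.minikube/machines/no-preload-333352/id_rsa",
            "hostname")
        fmt.Println(out, err)
    }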
	I1213 11:46:52.011473  596998 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-307042/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-307042/.minikube}
	I1213 11:46:52.011517  596998 ubuntu.go:190] setting up certificates
	I1213 11:46:52.011538  596998 provision.go:84] configureAuth start
	I1213 11:46:52.011606  596998 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-333352
	I1213 11:46:52.030543  596998 provision.go:143] copyHostCerts
	I1213 11:46:52.030630  596998 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem, removing ...
	I1213 11:46:52.030645  596998 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem
	I1213 11:46:52.030900  596998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem (1082 bytes)
	I1213 11:46:52.031021  596998 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem, removing ...
	I1213 11:46:52.031034  596998 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem
	I1213 11:46:52.031064  596998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem (1123 bytes)
	I1213 11:46:52.031134  596998 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem, removing ...
	I1213 11:46:52.031144  596998 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem
	I1213 11:46:52.031169  596998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem (1675 bytes)
	I1213 11:46:52.031226  596998 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem org=jenkins.no-preload-333352 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-333352]
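provision.go:117 regenerates the machine's server certificate with SANs [127.0.0.1 192.168.85.2 localhost minikube no-preload-333352], signed by the profile's CA. A compact crypto/x509 sketch of issuing such a cert; it generates a throwaway CA instead of loading ca.pem/ca-key.pem, and error handling is elided for brevity:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA for the sketch; minikube loads the existing CA key pair.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert carrying the SANs from the provision.go:117 line.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-333352"}},
            DNSNames:     []string{"localhost", "minikube", "no-preload-333352"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }

The resulting server.pem/server-key.pem pair is what copyRemoteCerts then pushes to /etc/docker on the machine.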
	I1213 11:46:52.199052  596998 provision.go:177] copyRemoteCerts
	I1213 11:46:52.199122  596998 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:46:52.199163  596998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:46:52.218347  596998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/no-preload-333352/id_rsa Username:docker}
	I1213 11:46:52.322404  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:46:52.340755  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 11:46:52.358393  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 11:46:52.375864  596998 provision.go:87] duration metric: took 364.299362ms to configureAuth
	I1213 11:46:52.375890  596998 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:46:52.376105  596998 config.go:182] Loaded profile config "no-preload-333352": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:46:52.376112  596998 machine.go:97] duration metric: took 3.896062654s to provisionDockerMachine
	I1213 11:46:52.376121  596998 start.go:293] postStartSetup for "no-preload-333352" (driver="docker")
	I1213 11:46:52.376132  596998 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:46:52.376180  596998 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:46:52.376225  596998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:46:52.393759  596998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/no-preload-333352/id_rsa Username:docker}
	I1213 11:46:52.503058  596998 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:46:52.506632  596998 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:46:52.506662  596998 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:46:52.506674  596998 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/addons for local assets ...
	I1213 11:46:52.506753  596998 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/files for local assets ...
	I1213 11:46:52.506839  596998 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> 3089152.pem in /etc/ssl/certs
	I1213 11:46:52.506949  596998 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:46:52.514878  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 11:46:52.533605  596998 start.go:296] duration metric: took 157.452449ms for postStartSetup
	I1213 11:46:52.533696  596998 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:46:52.533746  596998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:46:52.551775  596998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/no-preload-333352/id_rsa Username:docker}
	I1213 11:46:52.655971  596998 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:46:52.661022  596998 fix.go:56] duration metric: took 4.503236152s for fixHost
	I1213 11:46:52.661051  596998 start.go:83] releasing machines lock for "no-preload-333352", held for 4.503288469s
	I1213 11:46:52.661123  596998 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-333352
	I1213 11:46:52.678128  596998 ssh_runner.go:195] Run: cat /version.json
	I1213 11:46:52.678192  596998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:46:52.678486  596998 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:46:52.678544  596998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:46:52.698809  596998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/no-preload-333352/id_rsa Username:docker}
	I1213 11:46:52.701663  596998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/no-preload-333352/id_rsa Username:docker}
	I1213 11:46:52.895685  596998 ssh_runner.go:195] Run: systemctl --version
	I1213 11:46:52.902479  596998 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:46:52.907001  596998 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:46:52.907123  596998 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:46:52.915282  596998 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
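The find/mv above neutralizes any bridge or podman CNI config by renaming it with a .mk_disabled suffix so the runtime ignores it and kindnet can own pod networking; here it found nothing to rename. The same rename pass in plain Go (directory and match rules copied from the logged command):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeCNI renames every bridge/podman conf in dir with a
    // .mk_disabled suffix, skipping files already disabled.
    func disableBridgeCNI(dir string) error {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return err
        }
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return err
                }
                fmt.Printf("disabled %s\n", src)
            }
        }
        return nil
    }

    func main() {
        if err := disableBridgeCNI("/etc/cni/net.d"); err != nil {
            fmt.Println(err)
        }
        // no output = "no active bridge cni configs found ... nothing to disable"
    }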
	I1213 11:46:52.915312  596998 start.go:496] detecting cgroup driver to use...
	I1213 11:46:52.915343  596998 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 11:46:52.915421  596998 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 11:46:52.933908  596998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:46:52.947931  596998 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:46:52.947999  596998 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:46:52.963993  596998 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:46:52.977424  596998 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:46:53.103160  596998 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:46:53.238170  596998 docker.go:234] disabling docker service ...
	I1213 11:46:53.238265  596998 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:46:53.257118  596998 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:46:53.272790  596998 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:46:53.410295  596998 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:46:53.530871  596998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:46:53.544130  596998 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:46:53.559695  596998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 11:46:53.568863  596998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 11:46:53.578325  596998 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 11:46:53.578399  596998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 11:46:53.588010  596998 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:46:53.597447  596998 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 11:46:53.606673  596998 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:46:53.616093  596998 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:46:53.624546  596998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 11:46:53.633591  596998 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 11:46:53.642957  596998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 11:46:53.652128  596998 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:46:53.659821  596998 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:46:53.667713  596998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:46:53.790713  596998 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 11:46:53.892894  596998 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 11:46:53.893007  596998 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 11:46:53.896921  596998 start.go:564] Will wait 60s for crictl version
	I1213 11:46:53.897007  596998 ssh_runner.go:195] Run: which crictl
	I1213 11:46:53.900594  596998 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:46:53.944666  596998 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
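Both waits above follow the same poll-until-deadline shape: stat the containerd socket (then probe crictl) repeatedly for up to 60s. A small sketch of that loop; the 500ms interval is an assumption, since the log only states the 60s budget:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForPath polls until path exists or the deadline passes,
    // matching the "Will wait 60s for socket path" step.
    func waitForPath(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForPath("/run/containerd/containerd.sock", 60*time.Second); err != nil {
            panic(err)
        }
        fmt.Println("containerd socket is up")
    }

Here the socket appeared on the first stat (53.8 -> 53.9s in the timestamps), so neither 60s budget was consumed.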
	I1213 11:46:53.944790  596998 ssh_runner.go:195] Run: containerd --version
	I1213 11:46:53.967810  596998 ssh_runner.go:195] Run: containerd --version
	I1213 11:46:53.993455  596998 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 11:46:53.996395  596998 cli_runner.go:164] Run: docker network inspect no-preload-333352 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:46:54.023910  596998 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 11:46:54.028455  596998 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:46:54.039026  596998 kubeadm.go:884] updating cluster {Name:no-preload-333352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-333352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:46:54.039148  596998 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 11:46:54.039201  596998 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:46:54.065782  596998 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 11:46:54.065805  596998 cache_images.go:86] Images are preloaded, skipping loading
	I1213 11:46:54.065813  596998 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 11:46:54.065928  596998 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-333352 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-333352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:46:54.066000  596998 ssh_runner.go:195] Run: sudo crictl info
	I1213 11:46:54.093275  596998 cni.go:84] Creating CNI manager for ""
	I1213 11:46:54.093302  596998 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 11:46:54.093325  596998 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 11:46:54.093349  596998 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-333352 NodeName:no-preload-333352 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:46:54.093537  596998 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-333352"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
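
The kubeadm.yaml above is rendered from cluster settings like those in the kubeadm options dump. A trimmed text/template sketch of that render step; the template body is a stand-in showing only a few of the logged fields, not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    // Stand-in for the template behind "kubeadm.go:196] kubeadm config:".
    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: ClusterConfiguration
    controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceCIDR}}
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        // Values taken from the kubeadm options line above.
        t.Execute(os.Stdout, map[string]string{
            "ControlPlaneAddress": "control-plane.minikube.internal",
            "APIServerPort":       "8443",
            "KubernetesVersion":   "v1.35.0-beta.0",
            "PodSubnet":           "10.244.0.0/16",
            "ServiceCIDR":         "10.96.0.0/12",
        })
    }

The rendered document is then shipped to the node as /var/tmp/minikube/kubeadm.yaml.new (2237 bytes, per the scp line below).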
	
	I1213 11:46:54.093645  596998 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 11:46:54.101713  596998 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 11:46:54.101784  596998 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:46:54.109422  596998 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 11:46:54.122555  596998 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 11:46:54.135656  596998 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1213 11:46:54.148334  596998 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:46:54.151958  596998 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
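The one-liner above updates /etc/hosts idempotently: grep -v drops any existing line for the host, echo appends the fresh ip<TAB>name mapping, and the temp file is copied back over /etc/hosts. The same upsert in Go, written directly without the sudo/tmp-file dance:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHost strips any line whose host field is name, appends
    // "ip\tname", and writes the file back, so repeated runs never
    // duplicate the entry.
    func upsertHost(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var out []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // same filter as grep -v $'\t<name>$'
            }
            out = append(out, line)
        }
        out = append(out, fmt.Sprintf("%s\t%s", ip, name))
        return os.WriteFile(path, []byte(strings.Join(out, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := upsertHost("/etc/hosts", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
            panic(err)
        }
    }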
	I1213 11:46:54.162210  596998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:46:54.287595  596998 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:46:54.305395  596998 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352 for IP: 192.168.85.2
	I1213 11:46:54.305417  596998 certs.go:195] generating shared ca certs ...
	I1213 11:46:54.305434  596998 certs.go:227] acquiring lock for ca certs: {Name:mkca84a85b1b664dba08ef567e84d493256f825e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:46:54.305583  596998 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key
	I1213 11:46:54.305641  596998 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key
	I1213 11:46:54.305653  596998 certs.go:257] generating profile certs ...
	I1213 11:46:54.305755  596998 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/client.key
	I1213 11:46:54.305817  596998 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/apiserver.key.cd574fc3
	I1213 11:46:54.305860  596998 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/proxy-client.key
	I1213 11:46:54.305974  596998 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem (1338 bytes)
	W1213 11:46:54.306019  596998 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915_empty.pem, impossibly tiny 0 bytes
	I1213 11:46:54.306031  596998 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:46:54.306061  596998 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:46:54.306090  596998 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:46:54.306117  596998 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem (1675 bytes)
	I1213 11:46:54.306193  596998 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 11:46:54.306893  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:46:54.331092  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:46:54.350803  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:46:54.368808  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:46:54.387679  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 11:46:54.404957  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 11:46:54.422566  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:46:54.440705  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 11:46:54.458444  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem --> /usr/share/ca-certificates/308915.pem (1338 bytes)
	I1213 11:46:54.476249  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /usr/share/ca-certificates/3089152.pem (1708 bytes)
	I1213 11:46:54.494025  596998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:46:54.512671  596998 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:46:54.526330  596998 ssh_runner.go:195] Run: openssl version
	I1213 11:46:54.532951  596998 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3089152.pem
	I1213 11:46:54.540955  596998 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3089152.pem /etc/ssl/certs/3089152.pem
	I1213 11:46:54.548967  596998 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3089152.pem
	I1213 11:46:54.552993  596998 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:22 /usr/share/ca-certificates/3089152.pem
	I1213 11:46:54.553060  596998 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3089152.pem
	I1213 11:46:54.596516  596998 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:46:54.604001  596998 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:46:54.611355  596998 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:46:54.618889  596998 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:46:54.622912  596998 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:46:54.623031  596998 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:46:54.665964  596998 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:46:54.674514  596998 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/308915.pem
	I1213 11:46:54.683052  596998 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/308915.pem /etc/ssl/certs/308915.pem
	I1213 11:46:54.691830  596998 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/308915.pem
	I1213 11:46:54.696558  596998 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:22 /usr/share/ca-certificates/308915.pem
	I1213 11:46:54.696685  596998 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308915.pem
	I1213 11:46:54.739286  596998 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
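Each openssl x509 -hash -noout run above computes a certificate's subject hash, which names the /etc/ssl/certs/<hash>.0 symlink verified right after it (3ec20f2e.0, b5213941.0, 51391683.0): that hash-named link is how OpenSSL's lookup-by-hash finds trusted CAs. A sketch that shells out for the hash exactly as logged and creates the link:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash runs `openssl x509 -hash -noout -in <pem>` and
    // links /etc/ssl/certs/<hash>.0 at the cert, like ln -fs.
    func linkBySubjectHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // drop a stale link first, mirroring -f
        if err := os.Symlink(certPath, link); err != nil {
            return err
        }
        fmt.Println("linked", link, "->", certPath)
        return nil
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            panic(err)
        }
    }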
	I1213 11:46:54.747030  596998 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:46:54.751301  596998 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 11:46:54.792521  596998 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 11:46:54.848244  596998 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 11:46:54.897199  596998 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 11:46:54.938465  596998 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 11:46:54.979853  596998 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
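Each openssl x509 -checkend 86400 call above asks whether a certificate expires within the next 24 hours (86400s); a zero exit means the cert is still good, so no regeneration is needed. The equivalent check in Go's crypto/x509:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in pemPath
    // expires inside the given window, matching -checkend semantics.
    func expiresWithin(pemPath string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(pemPath)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", pemPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return cert.NotAfter.Before(time.Now().Add(window)), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 86400*time.Second)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }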
	I1213 11:46:55.021716  596998 kubeadm.go:401] StartCluster: {Name:no-preload-333352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-333352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:46:55.021819  596998 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 11:46:55.021905  596998 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:46:55.050987  596998 cri.go:89] found id: ""
	I1213 11:46:55.051064  596998 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:46:55.059300  596998 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 11:46:55.059321  596998 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 11:46:55.059393  596998 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 11:46:55.066981  596998 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 11:46:55.067384  596998 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-333352" does not appear in /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 11:46:55.067494  596998 kubeconfig.go:62] /home/jenkins/minikube-integration/22127-307042/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-333352" cluster setting kubeconfig missing "no-preload-333352" context setting]
	I1213 11:46:55.067794  596998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/kubeconfig: {Name:mk9039d1d506122ff52224469c478e739c5ebabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:46:55.069069  596998 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 11:46:55.083063  596998 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1213 11:46:55.083096  596998 kubeadm.go:602] duration metric: took 23.769764ms to restartPrimaryControlPlane
	I1213 11:46:55.083110  596998 kubeadm.go:403] duration metric: took 61.40393ms to StartCluster
	I1213 11:46:55.083126  596998 settings.go:142] acquiring lock: {Name:mk079e9a25ebbc2c8fbae42d4c6ed096a652c00b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:46:55.083190  596998 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 11:46:55.083859  596998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/kubeconfig: {Name:mk9039d1d506122ff52224469c478e739c5ebabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:46:55.084085  596998 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 11:46:55.084484  596998 config.go:182] Loaded profile config "no-preload-333352": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:46:55.084498  596998 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 11:46:55.084648  596998 addons.go:70] Setting storage-provisioner=true in profile "no-preload-333352"
	I1213 11:46:55.084663  596998 addons.go:239] Setting addon storage-provisioner=true in "no-preload-333352"
	I1213 11:46:55.084673  596998 addons.go:70] Setting dashboard=true in profile "no-preload-333352"
	I1213 11:46:55.084687  596998 addons.go:239] Setting addon dashboard=true in "no-preload-333352"
	W1213 11:46:55.084692  596998 addons.go:248] addon dashboard should already be in state true
	I1213 11:46:55.084699  596998 addons.go:70] Setting default-storageclass=true in profile "no-preload-333352"
	I1213 11:46:55.084713  596998 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-333352"
	I1213 11:46:55.084715  596998 host.go:66] Checking if "no-preload-333352" exists ...
	I1213 11:46:55.085024  596998 cli_runner.go:164] Run: docker container inspect no-preload-333352 --format={{.State.Status}}
	I1213 11:46:55.085259  596998 cli_runner.go:164] Run: docker container inspect no-preload-333352 --format={{.State.Status}}
	I1213 11:46:55.084693  596998 host.go:66] Checking if "no-preload-333352" exists ...
	I1213 11:46:55.086123  596998 cli_runner.go:164] Run: docker container inspect no-preload-333352 --format={{.State.Status}}
	I1213 11:46:55.089958  596998 out.go:179] * Verifying Kubernetes components...
	I1213 11:46:55.092885  596998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:46:55.118731  596998 addons.go:239] Setting addon default-storageclass=true in "no-preload-333352"
	I1213 11:46:55.118772  596998 host.go:66] Checking if "no-preload-333352" exists ...
	I1213 11:46:55.119210  596998 cli_runner.go:164] Run: docker container inspect no-preload-333352 --format={{.State.Status}}
	I1213 11:46:55.142748  596998 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:46:55.148113  596998 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 11:46:55.148237  596998 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:46:55.148248  596998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 11:46:55.148312  596998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:46:55.153556  596998 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 11:46:55.156401  596998 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 11:46:55.156436  596998 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 11:46:55.156518  596998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:46:55.170900  596998 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 11:46:55.170922  596998 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 11:46:55.170990  596998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-333352
	I1213 11:46:55.202915  596998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/no-preload-333352/id_rsa Username:docker}
	I1213 11:46:55.221059  596998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/no-preload-333352/id_rsa Username:docker}
	I1213 11:46:55.234212  596998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/no-preload-333352/id_rsa Username:docker}
	I1213 11:46:55.322944  596998 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:46:55.404599  596998 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 11:46:55.404621  596998 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 11:46:55.410339  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:46:55.424868  596998 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 11:46:55.424934  596998 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 11:46:55.437611  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 11:46:55.466532  596998 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 11:46:55.466598  596998 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 11:46:55.533480  596998 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 11:46:55.533543  596998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 11:46:55.558338  596998 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 11:46:55.558404  596998 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 11:46:55.573707  596998 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 11:46:55.573775  596998 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 11:46:55.586950  596998 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 11:46:55.587019  596998 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 11:46:55.599876  596998 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 11:46:55.599941  596998 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 11:46:55.613189  596998 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 11:46:55.613214  596998 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 11:46:55.626391  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 11:46:56.170314  596998 node_ready.go:35] waiting up to 6m0s for node "no-preload-333352" to be "Ready" ...
	W1213 11:46:56.170674  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:56.170727  596998 retry.go:31] will retry after 321.378191ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:46:56.170779  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:56.170796  596998 retry.go:31] will retry after 211.981666ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:46:56.170985  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:56.171002  596998 retry.go:31] will retry after 239.070892ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr identical to the ten dashboard-addon validation errors above)
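Every addon apply above fails the same way, and minikube falls back to its retry loop: the "will retry after 239.070892ms / 394.603627ms / 498.653036ms ..." lines show a roughly doubling, jittered delay. A minimal sketch of that retry shape (the constants and helper name are illustrative, not minikube's actual retry.go implementation):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries op with a jittered, doubling delay capped at
// maxDelay, printing lines shaped like the "will retry after ..." entries
// in the log above. Values here are illustrative, not minikube's.
func retryWithBackoff(op func() error, initial, maxDelay time.Duration, attempts int) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		// jitter so the parallel appliers (dashboard, storageclass,
		// storage-provisioner) do not retry in lockstep
		sleep := delay/2 + time.Duration(rand.Int63n(int64(delay)/2+1))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
	return err
}

func main() {
	calls := 0
	err := retryWithBackoff(func() error {
		if calls++; calls < 4 {
			return errors.New("dial tcp [::1]:8443: connect: connection refused")
		}
		return nil
	}, 200*time.Millisecond, 3*time.Second, 10)
	fmt.Println("final:", err)
}

Note that kubectl's stderr suggests --validate=false, which would skip the OpenAPI download; the addon manager instead keeps validation on and simply waits out the apiserver outage.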
	I1213 11:46:56.383589  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 11:46:56.411068  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:46:56.469548  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:56.469643  596998 retry.go:31] will retry after 394.603627ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	(stdout empty; stderr identical to the storageclass.yaml validation error above)
	W1213 11:46:56.477518  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr identical to the ten dashboard-addon validation errors above)
	I1213 11:46:56.477558  596998 retry.go:31] will retry after 498.653036ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr identical to the ten dashboard-addon validation errors above)
	I1213 11:46:56.492479  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:46:56.550411  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:46:56.550465  596998 retry.go:31] will retry after 487.503108ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout empty; stderr identical to the storage-provisioner.yaml validation error above)
	I1213 11:46:56.865341  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:46:56.967936  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	(stdout empty; stderr identical to the storageclass.yaml validation error above)
	I1213 11:46:56.968025  596998 retry.go:31] will retry after 717.718245ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	(stdout empty; stderr identical to the storageclass.yaml validation error above)
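The "failed to download openapi" wording is easy to misread: nothing is wrong with the manifests themselves. kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, so while nothing is listening on localhost:8443 every apply surfaces as a validation error. The same endpoint can be probed directly; a small sketch assuming the address from the log, with TLS verification disabled only because this is a throwaway diagnostic against a self-signed test cluster:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// diagnostic probe only: the minikube apiserver cert is
			// not signed by a CA in the system trust store
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8443/openapi/v2?timeout=32s")
	if err != nil {
		// while the apiserver is down this prints the same
		// "connect: connection refused" seen throughout the log
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver responded:", resp.Status)
}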
	I1213 11:46:56.977052  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 11:46:57.038612  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:46:57.046035  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr identical to the ten dashboard-addon validation errors above)
	I1213 11:46:57.046077  596998 retry.go:31] will retry after 431.172191ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr identical to the ten dashboard-addon validation errors above)
	W1213 11:46:57.103477  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout empty; stderr identical to the storage-provisioner.yaml validation error above)
	I1213 11:46:57.103510  596998 retry.go:31] will retry after 495.110582ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout empty; stderr identical to the storage-provisioner.yaml validation error above)
	I1213 11:46:57.477604  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:46:57.542568  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr identical to the ten dashboard-addon validation errors above)
	I1213 11:46:57.542608  596998 retry.go:31] will retry after 1.264774015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr identical to the ten dashboard-addon validation errors above)
	I1213 11:46:57.599678  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:46:57.658440  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout empty; stderr identical to the storage-provisioner.yaml validation error above)
	I1213 11:46:57.658483  596998 retry.go:31] will retry after 976.781113ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout empty; stderr identical to the storage-provisioner.yaml validation error above)
	I1213 11:46:57.686351  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:46:57.782906  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	(stdout empty; stderr identical to the storageclass.yaml validation error above)
	I1213 11:46:57.782941  596998 retry.go:31] will retry after 1.210299273s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	(stdout empty; stderr identical to the storageclass.yaml validation error above)
	W1213 11:46:58.170918  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
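Alongside the addon retries, node_ready.go is polling the node object over 192.168.85.2:8443 and hitting the same refused connection. That check reduces to reading the node's Ready condition; a minimal client-go equivalent, assuming the kubeconfig path and node name that appear in the log:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// path and node name come from the log; adjust for your environment
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-333352", metav1.GetOptions{})
	if err != nil {
		// while the apiserver is down this is exactly the
		// "connect: connection refused" logged above
		fmt.Println("get node:", err)
		return
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Println("Ready condition:", c.Status)
		}
	}
}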
	I1213 11:46:58.635525  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:46:58.695473  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout empty; stderr identical to the storage-provisioner.yaml validation error above)
	I1213 11:46:58.695513  596998 retry.go:31] will retry after 770.527982ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout empty; stderr identical to the storage-provisioner.yaml validation error above)
	I1213 11:46:58.807674  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:46:58.874925  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr identical to the ten dashboard-addon validation errors above)
	I1213 11:46:58.874962  596998 retry.go:31] will retry after 1.331403387s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr identical to the ten dashboard-addon validation errors above)
	I1213 11:46:58.994063  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:46:59.058328  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	(stdout empty; stderr identical to the storageclass.yaml validation error above)
	I1213 11:46:59.058362  596998 retry.go:31] will retry after 1.540138362s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	(stdout empty; stderr identical to the storageclass.yaml validation error above)
	I1213 11:46:59.466331  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:46:59.526972  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout empty; stderr identical to the storage-provisioner.yaml validation error above)
	I1213 11:46:59.527006  596998 retry.go:31] will retry after 1.010658159s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout empty; stderr identical to the storage-provisioner.yaml validation error above)
	W1213 11:47:00.171512  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:47:00.206721  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:47:00.355103  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr identical to the ten dashboard-addon validation errors above)
	I1213 11:47:00.355138  596998 retry.go:31] will retry after 2.476956922s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr identical to the ten dashboard-addon validation errors above)
	I1213 11:47:00.538651  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:47:00.599510  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:47:00.607813  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:47:00.607842  596998 retry.go:31] will retry after 2.846567669s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout/stderr elided; duplicate of the error logged immediately above]
	W1213 11:47:00.671803  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:47:00.671834  596998 retry.go:31] will retry after 1.147758556s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout/stderr elided; duplicate of the error logged immediately above]
	I1213 11:47:01.820380  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:47:01.879212  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout/stderr elided; same connection-refused validation error as the first storageclass attempt above]
	I1213 11:47:01.879244  596998 retry.go:31] will retry after 3.144985192s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout/stderr elided; duplicate of the error logged immediately above]
	W1213 11:47:02.670957  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
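The node_ready.go:55 warnings interleaved here are a separate poll against the same dead apiserver, this time via the node's cluster address (192.168.85.2:8443) rather than localhost. A sketch of the equivalent manual query (hypothetical, for illustration; it fails with the same "connection refused" while the apiserver is down):

	# Read the node's Ready condition via kubectl's JSONPath support.
	kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node no-preload-333352 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'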
	I1213 11:47:02.832252  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:47:02.902734  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout/stderr elided; same ten connection-refused validation errors as the first dashboard attempt above]
	I1213 11:47:02.902771  596998 retry.go:31] will retry after 3.378828885s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout/stderr elided; duplicate of the error logged immediately above]
	I1213 11:47:03.455263  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:47:03.521452  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout/stderr elided; same connection-refused validation error as the first storage-provisioner attempt above]
	I1213 11:47:03.521486  596998 retry.go:31] will retry after 3.23032482s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout/stderr elided; duplicate of the error logged immediately above]
	I1213 11:47:05.024515  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:47:05.083539  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout/stderr elided; same connection-refused validation error as the first storageclass attempt above]
	I1213 11:47:05.083572  596998 retry.go:31] will retry after 3.91018085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout/stderr elided; duplicate of the error logged immediately above]
	W1213 11:47:05.171119  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:47:06.282348  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:47:06.342380  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout/stderr elided; same ten connection-refused validation errors as the first dashboard attempt above]
	I1213 11:47:06.342417  596998 retry.go:31] will retry after 4.569051902s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout/stderr elided; duplicate of the error logged immediately above]
	I1213 11:47:06.752192  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:47:06.812324  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout/stderr elided; same connection-refused validation error as the first storage-provisioner attempt above]
	I1213 11:47:06.812362  596998 retry.go:31] will retry after 3.621339093s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout/stderr elided; duplicate of the error logged immediately above]
	W1213 11:47:07.171170  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:47:08.994724  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:47:09.059715  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout/stderr elided; same connection-refused validation error as the first storageclass attempt above]
	I1213 11:47:09.059750  596998 retry.go:31] will retry after 3.336187079s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout/stderr elided; duplicate of the error logged immediately above]
	W1213 11:47:09.171521  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:47:10.434821  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:47:10.527681  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout/stderr elided; same connection-refused validation error as the first storage-provisioner attempt above]
	I1213 11:47:10.527715  596998 retry.go:31] will retry after 8.747216293s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout/stderr elided; duplicate of the error logged immediately above]
	I1213 11:47:10.911760  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:47:10.973491  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout/stderr elided; same ten connection-refused validation errors as the first dashboard attempt above]
	I1213 11:47:10.973530  596998 retry.go:31] will retry after 6.563764078s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout/stderr elided; duplicate of the error logged immediately above]
	W1213 11:47:11.671509  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:47:12.396136  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:47:12.451525  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout/stderr elided; same connection-refused validation error as the first storageclass attempt above]
	I1213 11:47:12.451555  596998 retry.go:31] will retry after 12.979902201s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout/stderr elided; duplicate of the error logged immediately above]
	W1213 11:47:13.671774  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:47:16.171040  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:47:17.537629  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:47:17.605650  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout/stderr elided; same ten connection-refused validation errors as the first dashboard attempt above]
	I1213 11:47:17.605686  596998 retry.go:31] will retry after 13.028008559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout/stderr elided; duplicate of the error logged immediately above]
	W1213 11:47:18.171361  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:47:19.275997  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:47:19.342259  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout/stderr elided; same connection-refused validation error as the first storage-provisioner attempt above]
	I1213 11:47:19.342290  596998 retry.go:31] will retry after 20.165472284s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout/stderr elided; duplicate of the error logged immediately above]
	W1213 11:47:20.671224  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:47:23.171107  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:47:25.171144  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:47:25.431592  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:47:25.517211  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout/stderr elided; same connection-refused validation error as the first storageclass attempt above]
	I1213 11:47:25.517246  596998 retry.go:31] will retry after 17.190857405s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout/stderr elided; duplicate of the error logged immediately above]
	W1213 11:47:27.671038  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:47:29.671905  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
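The retry.go:31 delays in this run grow with jitter, from a few seconds on the early attempts to 13-20s by the later ones, so the addon applies back off rather than hammering the dead apiserver. A shell sketch of the same retry-with-backoff shape (illustrative only; the base delay, doubling, and attempt count are assumptions, not minikube's actual Go implementation):

	# Retry an apply with doubling backoff until it succeeds or attempts run out.
	delay=2
	for attempt in 1 2 3 4 5 6; do
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
	    -f /etc/kubernetes/addons/storageclass.yaml && break
	  sleep "$delay"
	  delay=$((delay * 2))
	done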
	I1213 11:47:30.634538  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:47:30.747730  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout/stderr elided; same ten connection-refused validation errors as the first dashboard attempt above]
	I1213 11:47:30.747766  596998 retry.go:31] will retry after 8.253172442s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:47:32.170901  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:47:34.170950  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:47:36.171702  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:47:38.671029  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:47:39.001281  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:47:39.065716  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[... stdout empty; stderr repeats the same ten dashboard manifest validation errors shown above ...]
	I1213 11:47:39.065747  596998 retry.go:31] will retry after 30.140073357s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[... stdout empty; stderr repeats the same ten dashboard manifest validation errors shown above ...]
	I1213 11:47:39.508018  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:47:39.565709  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:47:39.565750  596998 retry.go:31] will retry after 13.258391709s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[... stdout empty; stderr repeats the same storage-provisioner.yaml validation error shown above ...]
	W1213 11:47:41.170971  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:47:42.708360  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:47:42.777228  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:47:42.777262  596998 retry.go:31] will retry after 14.462485223s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[... stdout empty; stderr repeats the same storageclass.yaml validation error shown above ...]
	W1213 11:47:43.171411  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:47:45.171885  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:47:47.671008  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:47:50.170919  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:47:52.171024  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:47:52.825279  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:47:52.895300  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:47:52.895334  596998 retry.go:31] will retry after 42.53439734s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[... stdout empty; stderr repeats the same storage-provisioner.yaml validation error shown above ...]
	W1213 11:47:54.171468  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:47:56.671010  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:47:57.240410  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:47:57.300003  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:47:57.300038  596998 retry.go:31] will retry after 43.551114065s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[... stdout empty; stderr repeats the same storageclass.yaml validation error shown above ...]
	W1213 11:47:58.671871  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:01.171150  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:03.671009  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:06.171060  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:08.670995  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:48:09.206164  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:48:09.266520  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[... stdout empty; stderr repeats the same ten dashboard manifest validation errors shown above ...]
	I1213 11:48:09.266558  596998 retry.go:31] will retry after 38.20317151s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[... stdout empty; stderr repeats the same ten dashboard manifest validation errors shown above ...]
	W1213 11:48:10.671430  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:12.671901  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:15.171124  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:17.671141  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:19.671553  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:22.170909  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:24.170990  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:26.171623  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:28.671096  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:31.171091  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:33.670895  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:48:35.430795  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:48:35.490443  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:48:35.490550  596998 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[... stdout empty; stderr repeats the same storage-provisioner.yaml validation error shown above ...]
	]
	W1213 11:48:35.670951  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:37.671093  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:40.171057  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:48:40.852278  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:48:40.916394  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:48:40.916509  596998 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[... stdout empty; stderr repeats the same storageclass.yaml validation error shown above ...]
	]
	W1213 11:48:42.171144  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:44.171515  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:46.671821  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:48:47.470510  596998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:48:47.538580  596998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:48:47.538682  596998 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[... stdout empty; stderr repeats the same ten dashboard manifest validation errors shown above ...]
	]
	I1213 11:48:47.541933  596998 out.go:179] * Enabled addons: 
	I1213 11:48:47.544738  596998 addons.go:530] duration metric: took 1m52.460244741s for enable addons: enabled=[]
	W1213 11:48:49.170971  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:51.171371  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:53.670885  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:55.671127  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:48:58.171050  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:00.171123  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:02.171184  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:04.670961  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:06.671604  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:09.171017  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:11.671001  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:13.671410  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:15.671910  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:18.171029  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:20.670977  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:23.170985  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:25.171248  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:27.670921  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:30.171027  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:32.171089  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:34.671060  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:37.170891  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:39.171056  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:41.670906  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:44.170836  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:46.171894  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:48.671002  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:51.170981  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:53.671005  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:49:55.671144  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
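
The node_ready.go loop above is just a timed GET against the apiserver's node object, and while the control plane is down every probe ends in connection refused. A self-contained sketch of that probe using only the Go standard library; the endpoint, node name, retry interval, and the insecure TLS setting are assumptions for illustration (the real code authenticates and verifies TLS via the kubeconfig):

    // node_ready_probe.go - illustrative sketch, not minikube source.
    package main

    import (
    	"crypto/tls"
    	"encoding/json"
    	"fmt"
    	"net/http"
    	"time"
    )

    type node struct {
    	Status struct {
    		Conditions []struct {
    			Type   string `json:"type"`
    			Status string `json:"status"`
    		} `json:"conditions"`
    	} `json:"status"`
    }

    func main() {
    	// Verification skipped only because this is a throwaway sketch.
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	for {
    		resp, err := client.Get("https://192.168.85.2:8443/api/v1/nodes/no-preload-333352")
    		if err != nil {
    			fmt.Println("will retry:", err) // matches the W lines above
    			time.Sleep(2 * time.Second)
    			continue
    		}
    		var n node
    		_ = json.NewDecoder(resp.Body).Decode(&n)
    		resp.Body.Close()
    		for _, c := range n.Status.Conditions {
    			if c.Type == "Ready" && c.Status == "True" {
    				fmt.Println("node is Ready")
    				return
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    }
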
	I1213 11:50:00.707869  589123 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000374701s
	I1213 11:50:00.707898  589123 kubeadm.go:319] 
	I1213 11:50:00.707956  589123 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 11:50:00.707990  589123 kubeadm.go:319] 	- The kubelet is not running
	I1213 11:50:00.708096  589123 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 11:50:00.708101  589123 kubeadm.go:319] 
	I1213 11:50:00.708207  589123 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 11:50:00.708239  589123 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 11:50:00.708270  589123 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 11:50:00.708274  589123 kubeadm.go:319] 
	I1213 11:50:00.719023  589123 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 11:50:00.719530  589123 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 11:50:00.719698  589123 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 11:50:00.720025  589123 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 11:50:00.720042  589123 kubeadm.go:319] 
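
As the error message itself says, kubeadm's wait-control-plane phase amounts to polling the kubelet's local healthz endpoint until a deadline; when the kubelet never comes up, the context deadline expires. A sketch of that check in Go, using the same URL the message names (the 4-minute budget matches the "not healthy after 4m0s" line above and is otherwise an assumption):

    // kubelet_healthz.go - illustrative sketch, not kubeadm source.
    package main

    import (
    	"context"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
    	defer cancel()
    	for {
    		req, _ := http.NewRequestWithContext(ctx, http.MethodGet,
    			"http://127.0.0.1:10248/healthz", nil)
    		resp, err := http.DefaultClient.Do(req)
    		if err == nil && resp.StatusCode == http.StatusOK {
    			resp.Body.Close()
    			fmt.Println("kubelet healthy")
    			return
    		}
    		if err == nil {
    			resp.Body.Close()
    		}
    		select {
    		case <-ctx.Done():
    			fmt.Println("context deadline exceeded") // the failure mode logged above
    			return
    		case <-time.After(time.Second):
    		}
    	}
    }
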
	I1213 11:50:00.720173  589123 kubeadm.go:403] duration metric: took 8m6.761683072s to StartCluster
	I1213 11:50:00.720209  589123 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:50:00.720274  589123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:50:00.720362  589123 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 11:50:00.755118  589123 cri.go:89] found id: ""
	I1213 11:50:00.755161  589123 logs.go:282] 0 containers: []
	W1213 11:50:00.755171  589123 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:50:00.755178  589123 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:50:00.755246  589123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:50:00.781097  589123 cri.go:89] found id: ""
	I1213 11:50:00.781120  589123 logs.go:282] 0 containers: []
	W1213 11:50:00.781128  589123 logs.go:284] No container was found matching "etcd"
	I1213 11:50:00.781134  589123 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:50:00.781192  589123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:50:00.806528  589123 cri.go:89] found id: ""
	I1213 11:50:00.806552  589123 logs.go:282] 0 containers: []
	W1213 11:50:00.806559  589123 logs.go:284] No container was found matching "coredns"
	I1213 11:50:00.806566  589123 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:50:00.806623  589123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:50:00.836428  589123 cri.go:89] found id: ""
	I1213 11:50:00.836452  589123 logs.go:282] 0 containers: []
	W1213 11:50:00.836460  589123 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:50:00.836466  589123 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:50:00.836530  589123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:50:00.860830  589123 cri.go:89] found id: ""
	I1213 11:50:00.860898  589123 logs.go:282] 0 containers: []
	W1213 11:50:00.860915  589123 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:50:00.860922  589123 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:50:00.860991  589123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:50:00.886194  589123 cri.go:89] found id: ""
	I1213 11:50:00.886222  589123 logs.go:282] 0 containers: []
	W1213 11:50:00.886230  589123 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:50:00.886237  589123 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:50:00.886298  589123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:50:00.911416  589123 cri.go:89] found id: ""
	I1213 11:50:00.911442  589123 logs.go:282] 0 containers: []
	W1213 11:50:00.911451  589123 logs.go:284] No container was found matching "kindnet"
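
The cri.go lines above walk a fixed list of control-plane component names and ask crictl for matching container IDs; an empty result for every name confirms that no control-plane container was ever started. A sketch of that loop, shelling out to crictl the way the ssh_runner lines do (the helper and output format here are assumptions):

    // cri_list.go - illustrative sketch, not minikube source.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    	}
    	for _, name := range components {
    		out, err := exec.Command("sudo", "crictl", "ps", "-a",
    			"--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Println(name, "error:", err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		if len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", name)
    			continue
    		}
    		fmt.Println(name, "containers:", ids)
    	}
    }
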
	I1213 11:50:00.911461  589123 logs.go:123] Gathering logs for dmesg ...
	I1213 11:50:00.911494  589123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:50:00.927545  589123 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:50:00.927575  589123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:50:00.994023  589123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:50:00.985916    4847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:50:00.986512    4847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:50:00.988048    4847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:50:00.988526    4847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:50:00.990075    4847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:50:00.985916    4847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:50:00.986512    4847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:50:00.988048    4847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:50:00.988526    4847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:50:00.990075    4847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:50:00.994047  589123 logs.go:123] Gathering logs for containerd ...
	I1213 11:50:00.994060  589123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:50:01.033895  589123 logs.go:123] Gathering logs for container status ...
	I1213 11:50:01.033932  589123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:50:01.062457  589123 logs.go:123] Gathering logs for kubelet ...
	I1213 11:50:01.062485  589123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 11:50:01.120952  589123 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000374701s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 11:50:01.121022  589123 out.go:285] * 
	W1213 11:50:01.121080  589123 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout/stderr omitted here: byte-for-byte identical to the kubeadm init output printed above]
	
	W1213 11:50:01.121096  589123 out.go:285] * 
	W1213 11:50:01.123307  589123 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
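
The box above points at minikube's own log bundling; the individual steps it performs are visible in the "Gathering logs for ..." lines earlier in this run. To capture the same material in one file, per the box's own instruction (the -p flag is an assumption here, since the box omits the profile):

    # Bundle the full cluster logs into logs.txt for attachment to an issue.
    minikube logs --file=logs.txt -p newest-cni-796924
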
	I1213 11:50:01.129091  589123 out.go:203] 
	W1213 11:50:01.132826  589123 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout/stderr omitted here: byte-for-byte identical to the kubeadm init output printed above]
	
	W1213 11:50:01.132880  589123 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 11:50:01.132907  589123 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 11:50:01.136752  589123 out.go:203] 
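
The suggestion minikube prints above translates into a concrete retry. A sketch assuming the profile and driver flags used elsewhere in this report; the --extra-config value is quoted verbatim from the suggestion, and whether it actually clears the cgroup v1 validation failure shown in the kubelet journal below is not established by this report:

    # Inspect why the kubelet keeps exiting, as the suggestion advises.
    minikube ssh -p newest-cni-796924 "sudo journalctl -xeu kubelet | tail -n 50"

    # Retry the start with the suggested cgroup driver override.
    minikube start -p newest-cni-796924 --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.35.0-beta.0 \
      --extra-config=kubelet.cgroup-driver=systemd
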
	W1213 11:49:58.171063  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:00.671726  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:03.171383  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:05.171448  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:07.171694  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:09.671514  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:12.171022  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:14.670928  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:16.670984  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:19.170955  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:21.670902  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:23.671215  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:25.671545  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:28.171776  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:30.671175  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:33.171188  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:35.171254  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:37.171656  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:39.670878  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:41.670932  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:44.170877  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:46.171667  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:48.670818  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:50.670856  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:52.670921  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:55.171417  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:50:57.671390  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:00.171032  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:02.670957  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:04.671010  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:07.170942  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:09.171076  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:11.171630  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:13.671044  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:15.671569  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:18.171750  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:20.671044  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:23.171197  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:25.671191  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:28.171352  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:30.670915  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:32.671036  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:34.671254  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:36.671675  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:39.171023  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:41.671018  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
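
The retry loop above is a plain HTTPS GET against the apiserver, and the connection-refused result can be reproduced directly; the address and path are taken from the log lines themselves:

    # Expect "connection refused" while the control plane is down; -k skips
    # certificate verification since this is only a reachability probe.
    curl -k https://192.168.85.2:8443/api/v1/nodes/no-preload-333352
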
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.265909379Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.265936489Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.265986500Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.266013774Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.266024572Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.266037405Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.266046923Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.266058050Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.266074567Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.266104909Z" level=info msg="Connect containerd service"
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.266428859Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.267145379Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.278775042Z" level=info msg="Start subscribing containerd event"
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.278930251Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.279075164Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.279020098Z" level=info msg="Start recovering state"
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.316986614Z" level=info msg="Start event monitor"
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.317036821Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.317046774Z" level=info msg="Start streaming server"
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.317056440Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.317064596Z" level=info msg="runtime interface starting up..."
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.317071119Z" level=info msg="starting plugins..."
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.317082492Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 11:41:52 newest-cni-796924 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 11:41:52 newest-cni-796924 containerd[757]: time="2025-12-13T11:41:52.319226019Z" level=info msg="containerd successfully booted in 0.078209s"
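
The "failed to load cni during init" error above is consistent with an empty CNI configuration directory at that point in boot; the path comes from the error message itself. A quick check, assuming the same profile:

    # Empty output here matches "no network config found in /etc/cni/net.d".
    # kindnet (the CNI minikube recommends for the docker driver with
    # containerd, per the cni.go lines later in this report) only writes its
    # config once the control plane is up, which never happens in this run.
    minikube ssh -p newest-cni-796924 "ls -la /etc/cni/net.d"
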
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:51:46.324989    5996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:51:46.325470    5996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:51:46.327558    5996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:51:46.327938    5996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:51:46.329140    5996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +12.907693] overlayfs: idmapped layers are currently not supported
	[Dec13 09:25] overlayfs: idmapped layers are currently not supported
	[ +26.192425] overlayfs: idmapped layers are currently not supported
	[Dec13 09:26] overlayfs: idmapped layers are currently not supported
	[ +25.729788] overlayfs: idmapped layers are currently not supported
	[Dec13 09:27] overlayfs: idmapped layers are currently not supported
	[Dec13 09:28] overlayfs: idmapped layers are currently not supported
	[Dec13 09:31] overlayfs: idmapped layers are currently not supported
	[Dec13 09:32] overlayfs: idmapped layers are currently not supported
	[ +17.979093] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec13 09:43] overlayfs: idmapped layers are currently not supported
	[Dec13 09:45] overlayfs: idmapped layers are currently not supported
	[ +25.885102] overlayfs: idmapped layers are currently not supported
	[Dec13 09:46] overlayfs: idmapped layers are currently not supported
	[ +22.078149] overlayfs: idmapped layers are currently not supported
	[Dec13 09:47] overlayfs: idmapped layers are currently not supported
	[Dec13 09:48] overlayfs: idmapped layers are currently not supported
	[Dec13 09:49] overlayfs: idmapped layers are currently not supported
	[Dec13 09:51] overlayfs: idmapped layers are currently not supported
	[ +17.043564] overlayfs: idmapped layers are currently not supported
	[Dec13 09:52] overlayfs: idmapped layers are currently not supported
	[Dec13 09:53] overlayfs: idmapped layers are currently not supported
	[Dec13 10:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:19] hrtimer: interrupt took 21247146 ns
	
	
	==> kernel <==
	 11:51:46 up  4:34,  0 user,  load average: 0.62, 0.81, 1.42
	Linux newest-cni-796924 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 11:51:42 newest-cni-796924 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:51:43 newest-cni-796924 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 457.
	Dec 13 11:51:43 newest-cni-796924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:51:43 newest-cni-796924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:51:43 newest-cni-796924 kubelet[5874]: E1213 11:51:43.474556    5874 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:51:43 newest-cni-796924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:51:43 newest-cni-796924 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:51:44 newest-cni-796924 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 458.
	Dec 13 11:51:44 newest-cni-796924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:51:44 newest-cni-796924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:51:44 newest-cni-796924 kubelet[5880]: E1213 11:51:44.230434    5880 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:51:44 newest-cni-796924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:51:44 newest-cni-796924 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:51:44 newest-cni-796924 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 459.
	Dec 13 11:51:44 newest-cni-796924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:51:44 newest-cni-796924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:51:44 newest-cni-796924 kubelet[5886]: E1213 11:51:44.980671    5886 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:51:44 newest-cni-796924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:51:44 newest-cni-796924 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:51:45 newest-cni-796924 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 460.
	Dec 13 11:51:45 newest-cni-796924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:51:45 newest-cni-796924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:51:45 newest-cni-796924 kubelet[5912]: E1213 11:51:45.769133    5912 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:51:45 newest-cni-796924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:51:45 newest-cni-796924 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
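
Every kubelet restart in the journal above fails the same validation: the host is on cgroup v1, which kubelet v1.35 rejects by default. Two hedged checks, using only paths and option names that appear in the warnings above (failCgroupV1 is assumed to be the YAML spelling of the FailCgroupV1 option named in the kubeadm warning):

    # Standard cgroup-version probe: cgroup2fs means the unified v2
    # hierarchy; tmpfs means legacy cgroup v1, the failing case here.
    minikube ssh -p newest-cni-796924 "stat -fc %T /sys/fs/cgroup/"

    # Per the kubeadm warning, cgroup v1 must be opted into explicitly in
    # the kubelet configuration; check whether the option is set at all.
    minikube ssh -p newest-cni-796924 "grep -i failcgroupv1 /var/lib/kubelet/config.yaml || echo option not set"
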
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-796924 -n newest-cni-796924
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-796924 -n newest-cni-796924: exit status 6 (341.177649ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 11:51:46.888992  603707 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-796924" does not appear in /home/jenkins/minikube-integration/22127-307042/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-796924" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (104.06s)
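
The status check above fails for two stacked reasons: the apiserver is stopped, and the profile's endpoint is missing from the kubeconfig. The second is exactly what the WARNING in the stdout block suggests fixing; a sketch using the profile from this test:

    # Rewrite the kubeconfig entry for the profile (per the WARNING above),
    # then confirm which context kubectl is now pointing at.
    minikube update-context -p newest-cni-796924
    kubectl config current-context
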

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (374.01s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-796924 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-796924 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 105 (6m8.885666291s)

                                                
                                                
-- stdout --
	* [newest-cni-796924] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "newest-cni-796924" primary control-plane node in "newest-cni-796924" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 11:51:48.463604  604010 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:51:48.463796  604010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:51:48.463823  604010 out.go:374] Setting ErrFile to fd 2...
	I1213 11:51:48.463842  604010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:51:48.464235  604010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 11:51:48.465119  604010 out.go:368] Setting JSON to false
	I1213 11:51:48.466102  604010 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":16461,"bootTime":1765610247,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 11:51:48.466204  604010 start.go:143] virtualization:  
	I1213 11:51:48.469444  604010 out.go:179] * [newest-cni-796924] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:51:48.473497  604010 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:51:48.473608  604010 notify.go:221] Checking for updates...
	I1213 11:51:48.479464  604010 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:51:48.482541  604010 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 11:51:48.485448  604010 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 11:51:48.488462  604010 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:51:48.491424  604010 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:51:48.494980  604010 config.go:182] Loaded profile config "newest-cni-796924": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:51:48.495553  604010 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:51:48.518013  604010 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:51:48.518194  604010 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:51:48.596406  604010 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:51:48.586781308 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:51:48.596541  604010 docker.go:319] overlay module found
	I1213 11:51:48.599865  604010 out.go:179] * Using the docker driver based on existing profile
	I1213 11:51:48.602647  604010 start.go:309] selected driver: docker
	I1213 11:51:48.602672  604010 start.go:927] validating driver "docker" against &{Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:51:48.602834  604010 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:51:48.603569  604010 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:51:48.671569  604010 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:51:48.654666754 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:51:48.671930  604010 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 11:51:48.671965  604010 cni.go:84] Creating CNI manager for ""
	I1213 11:51:48.672022  604010 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 11:51:48.672078  604010 start.go:353] cluster config:
	{Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
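The cluster config dumped above is also persisted to the profile's config.json (the save appears a few lines below). A minimal sketch of reading a couple of those fields back, assuming only the key names visible in the dump; the struct here is a hypothetical subset, not minikube's actual type:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // Hypothetical subset of the profile config; field names are assumed
    // to match the keys visible in the dump above.
    type profileConfig struct {
    	Name             string `json:"Name"`
    	Driver           string `json:"Driver"`
    	KubernetesConfig struct {
    		KubernetesVersion string `json:"KubernetesVersion"`
    		ClusterName       string `json:"ClusterName"`
    	} `json:"KubernetesConfig"`
    }

    func main() {
    	// Path taken from the profile.go log lines; adjust for your environment.
    	data, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/profiles/newest-cni-796924/config.json"))
    	if err != nil {
    		panic(err)
    	}
    	var cfg profileConfig
    	if err := json.Unmarshal(data, &cfg); err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s: driver=%s k8s=%s\n", cfg.Name, cfg.Driver, cfg.KubernetesConfig.KubernetesVersion)
    }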
	I1213 11:51:48.675265  604010 out.go:179] * Starting "newest-cni-796924" primary control-plane node in "newest-cni-796924" cluster
	I1213 11:51:48.678207  604010 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 11:51:48.681114  604010 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:51:48.683920  604010 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 11:51:48.683976  604010 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 11:51:48.683989  604010 cache.go:65] Caching tarball of preloaded images
	I1213 11:51:48.684102  604010 preload.go:238] Found /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 11:51:48.684116  604010 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 11:51:48.684232  604010 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/config.json ...
	I1213 11:51:48.684464  604010 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:51:48.711458  604010 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:51:48.711481  604010 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:51:48.711496  604010 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:51:48.711527  604010 start.go:360] acquireMachinesLock for newest-cni-796924: {Name:mkb23dc851632c47983afd0f3cb215d071a4c6d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:51:48.711588  604010 start.go:364] duration metric: took 38.818µs to acquireMachinesLock for "newest-cni-796924"
	I1213 11:51:48.711608  604010 start.go:96] Skipping create...Using existing machine configuration
	I1213 11:51:48.711613  604010 fix.go:54] fixHost starting: 
	I1213 11:51:48.711888  604010 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:51:48.735758  604010 fix.go:112] recreateIfNeeded on newest-cni-796924: state=Stopped err=<nil>
	W1213 11:51:48.735799  604010 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 11:51:48.739083  604010 out.go:252] * Restarting existing docker container for "newest-cni-796924" ...
	I1213 11:51:48.739191  604010 cli_runner.go:164] Run: docker start newest-cni-796924
	I1213 11:51:48.989234  604010 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:51:49.013708  604010 kic.go:430] container "newest-cni-796924" state is running.
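The three cli_runner calls above form minikube's fixHost flow: inspect the container's state, `docker start` it because it was stopped, then re-inspect. A compact Go sketch of that inspect-then-start pattern:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // ensureRunning mirrors the inspect-then-start sequence in the log:
    // read the container state and issue `docker start` only when it is
    // not already running.
    func ensureRunning(name string) error {
    	out, err := exec.Command("docker", "container", "inspect",
    		name, "--format", "{{.State.Status}}").Output()
    	if err != nil {
    		return err
    	}
    	if state := strings.TrimSpace(string(out)); state != "running" {
    		fmt.Printf("container %s is %s, starting\n", name, state)
    		return exec.Command("docker", "start", name).Run()
    	}
    	return nil
    }

    func main() {
    	if err := ensureRunning("newest-cni-796924"); err != nil {
    		panic(err)
    	}
    }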
	I1213 11:51:49.014143  604010 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-796924
	I1213 11:51:49.035818  604010 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/config.json ...
	I1213 11:51:49.036044  604010 machine.go:94] provisionDockerMachine start ...
	I1213 11:51:49.036107  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:49.066663  604010 main.go:143] libmachine: Using SSH client type: native
	I1213 11:51:49.067143  604010 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1213 11:51:49.067157  604010 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:51:49.067832  604010 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47590->127.0.0.1:33440: read: connection reset by peer
	I1213 11:51:52.226322  604010 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-796924
	
	I1213 11:51:52.226353  604010 ubuntu.go:182] provisioning hostname "newest-cni-796924"
	I1213 11:51:52.226417  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:52.244890  604010 main.go:143] libmachine: Using SSH client type: native
	I1213 11:51:52.245240  604010 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1213 11:51:52.245259  604010 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-796924 && echo "newest-cni-796924" | sudo tee /etc/hostname
	I1213 11:51:52.409909  604010 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-796924
	
	I1213 11:51:52.410005  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:52.440908  604010 main.go:143] libmachine: Using SSH client type: native
	I1213 11:51:52.441219  604010 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1213 11:51:52.441235  604010 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-796924' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-796924/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-796924' | sudo tee -a /etc/hosts; 
				fi
			fi
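The shell fragment above keeps /etc/hosts idempotent: rewrite an existing 127.0.1.1 line if one is present, otherwise append a new one. A simplified Go equivalent of the check-then-append half (illustrative only; the real edit runs over SSH via the script above):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry appends "ip name" to a hosts file unless some line
    // already mentions the name, mimicking the grep/tee logic above.
    func ensureHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	for _, line := range strings.Split(string(data), "\n") {
    		if strings.Contains(line, name) {
    			return nil // already present
    		}
    	}
    	f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0644)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	_, err = fmt.Fprintf(f, "%s %s\n", ip, name)
    	return err
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "127.0.1.1", "newest-cni-796924"); err != nil {
    		panic(err)
    	}
    }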
	I1213 11:51:52.595320  604010 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:51:52.595345  604010 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-307042/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-307042/.minikube}
	I1213 11:51:52.595378  604010 ubuntu.go:190] setting up certificates
	I1213 11:51:52.595395  604010 provision.go:84] configureAuth start
	I1213 11:51:52.595456  604010 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-796924
	I1213 11:51:52.612730  604010 provision.go:143] copyHostCerts
	I1213 11:51:52.612805  604010 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem, removing ...
	I1213 11:51:52.612815  604010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem
	I1213 11:51:52.612893  604010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem (1082 bytes)
	I1213 11:51:52.612991  604010 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem, removing ...
	I1213 11:51:52.612997  604010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem
	I1213 11:51:52.613022  604010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem (1123 bytes)
	I1213 11:51:52.613072  604010 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem, removing ...
	I1213 11:51:52.613077  604010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem
	I1213 11:51:52.613099  604010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem (1675 bytes)
	I1213 11:51:52.613145  604010 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem org=jenkins.newest-cni-796924 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-796924]
	I1213 11:51:52.732846  604010 provision.go:177] copyRemoteCerts
	I1213 11:51:52.732930  604010 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:51:52.732973  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:52.750653  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:52.855439  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:51:52.874016  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 11:51:52.892129  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 11:51:52.911103  604010 provision.go:87] duration metric: took 315.684656ms to configureAuth
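configureAuth above generated a server certificate whose SANs are listed in the provision log line (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-796924). A self-contained sketch of minting such a certificate with crypto/x509; it is self-signed for brevity, whereas the real flow signs with ca.pem/ca-key.pem:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-796924"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs from the provision log line above.
    		DNSNames:    []string{"localhost", "minikube", "newest-cni-796924"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
    	}
    	// Self-signed for the sketch; the real flow signs with the cluster CA.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }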
	I1213 11:51:52.911132  604010 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:51:52.911332  604010 config.go:182] Loaded profile config "newest-cni-796924": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:51:52.911340  604010 machine.go:97] duration metric: took 3.875289031s to provisionDockerMachine
	I1213 11:51:52.911347  604010 start.go:293] postStartSetup for "newest-cni-796924" (driver="docker")
	I1213 11:51:52.911359  604010 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:51:52.911407  604010 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:51:52.911460  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:52.929094  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:53.034971  604010 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:51:53.038558  604010 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:51:53.038590  604010 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:51:53.038602  604010 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/addons for local assets ...
	I1213 11:51:53.038659  604010 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/files for local assets ...
	I1213 11:51:53.038763  604010 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> 3089152.pem in /etc/ssl/certs
	I1213 11:51:53.038874  604010 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:51:53.046532  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 11:51:53.064751  604010 start.go:296] duration metric: took 153.388066ms for postStartSetup
	I1213 11:51:53.064850  604010 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:51:53.064897  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:53.083055  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:53.186537  604010 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:51:53.194814  604010 fix.go:56] duration metric: took 4.483190974s for fixHost
	I1213 11:51:53.194902  604010 start.go:83] releasing machines lock for "newest-cni-796924", held for 4.483304896s
	I1213 11:51:53.195014  604010 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-796924
	I1213 11:51:53.218858  604010 ssh_runner.go:195] Run: cat /version.json
	I1213 11:51:53.218911  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:53.219425  604010 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:51:53.219496  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:53.245887  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:53.248082  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:53.440734  604010 ssh_runner.go:195] Run: systemctl --version
	I1213 11:51:53.447618  604010 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:51:53.452306  604010 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:51:53.452441  604010 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:51:53.460789  604010 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 11:51:53.460813  604010 start.go:496] detecting cgroup driver to use...
	I1213 11:51:53.460876  604010 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 11:51:53.460961  604010 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 11:51:53.478830  604010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:51:53.493048  604010 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:51:53.493110  604010 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:51:53.509243  604010 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:51:53.522928  604010 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:51:53.639237  604010 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:51:53.752852  604010 docker.go:234] disabling docker service ...
	I1213 11:51:53.752960  604010 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:51:53.768708  604010 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:51:53.782124  604010 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:51:53.903168  604010 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:51:54.054509  604010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:51:54.067985  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:51:54.083550  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 11:51:54.093447  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 11:51:54.102944  604010 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 11:51:54.103048  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 11:51:54.112424  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:51:54.121802  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 11:51:54.130945  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:51:54.140080  604010 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:51:54.148567  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 11:51:54.157935  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 11:51:54.167456  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 11:51:54.176969  604010 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:51:54.184730  604010 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:51:54.192410  604010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:51:54.297614  604010 ssh_runner.go:195] Run: sudo systemctl restart containerd
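The run of sed one-liners above rewrites /etc/containerd/config.toml in place (sandbox image, runtime class, cgroup driver) before the daemon-reload and containerd restart. A sketch of the SystemdCgroup flip as a Go regexp rewrite, using the same pattern as the sed at 11:51:54.103:

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/containerd/config.toml"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	// Same pattern as the sed above: keep the leading indent and
    	// force SystemdCgroup to false (the "cgroupfs" driver case).
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
    	if err := os.WriteFile(path, out, 0644); err != nil {
    		panic(err)
    	}
    }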
	I1213 11:51:54.415943  604010 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 11:51:54.416062  604010 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 11:51:54.419918  604010 start.go:564] Will wait 60s for crictl version
	I1213 11:51:54.420004  604010 ssh_runner.go:195] Run: which crictl
	I1213 11:51:54.424003  604010 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:51:54.449039  604010 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 11:51:54.449144  604010 ssh_runner.go:195] Run: containerd --version
	I1213 11:51:54.473383  604010 ssh_runner.go:195] Run: containerd --version
	I1213 11:51:54.499419  604010 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 11:51:54.502369  604010 cli_runner.go:164] Run: docker network inspect newest-cni-796924 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:51:54.518648  604010 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 11:51:54.522791  604010 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:51:54.535931  604010 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 11:51:54.538956  604010 kubeadm.go:884] updating cluster {Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:51:54.539121  604010 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 11:51:54.539232  604010 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:51:54.563801  604010 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 11:51:54.563827  604010 containerd.go:534] Images already preloaded, skipping extraction
	I1213 11:51:54.563893  604010 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:51:54.592245  604010 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 11:51:54.592267  604010 cache_images.go:86] Images are preloaded, skipping loading
	I1213 11:51:54.592274  604010 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 11:51:54.592392  604010 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-796924 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:51:54.592461  604010 ssh_runner.go:195] Run: sudo crictl info
	I1213 11:51:54.621799  604010 cni.go:84] Creating CNI manager for ""
	I1213 11:51:54.621822  604010 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 11:51:54.621841  604010 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 11:51:54.621863  604010 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-796924 NodeName:newest-cni-796924 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:51:54.621977  604010 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-796924"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
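The generated kubeadm config above is a multi-document file: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by ---. A sketch of iterating over those documents, assuming gopkg.in/yaml.v3 is available and using the /var/tmp/minikube path from the scp line below:

    package main

    import (
    	"fmt"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	dec := yaml.NewDecoder(f) // iterates over the "---"-separated documents
    	for {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err != nil {
    			break // io.EOF once all four documents are consumed
    		}
    		fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
    	}
    }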
	
	I1213 11:51:54.622049  604010 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 11:51:54.629798  604010 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 11:51:54.629892  604010 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:51:54.637447  604010 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 11:51:54.650384  604010 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 11:51:54.666817  604010 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1213 11:51:54.689998  604010 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:51:54.695776  604010 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:51:54.710482  604010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:51:54.832824  604010 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:51:54.850492  604010 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924 for IP: 192.168.76.2
	I1213 11:51:54.850566  604010 certs.go:195] generating shared ca certs ...
	I1213 11:51:54.850597  604010 certs.go:227] acquiring lock for ca certs: {Name:mkca84a85b1b664dba08ef567e84d493256f825e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:51:54.850790  604010 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key
	I1213 11:51:54.850872  604010 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key
	I1213 11:51:54.850895  604010 certs.go:257] generating profile certs ...
	I1213 11:51:54.851026  604010 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/client.key
	I1213 11:51:54.851129  604010 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key.ced45374
	I1213 11:51:54.851211  604010 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.key
	I1213 11:51:54.851379  604010 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem (1338 bytes)
	W1213 11:51:54.851441  604010 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915_empty.pem, impossibly tiny 0 bytes
	I1213 11:51:54.851467  604010 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:51:54.851513  604010 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:51:54.851568  604010 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:51:54.851620  604010 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem (1675 bytes)
	I1213 11:51:54.851698  604010 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 11:51:54.852295  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:51:54.879994  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:51:54.900131  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:51:54.919515  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:51:54.939840  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 11:51:54.959348  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 11:51:54.977529  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:51:54.995648  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 11:51:55.023031  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /usr/share/ca-certificates/3089152.pem (1708 bytes)
	I1213 11:51:55.043814  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:51:55.063273  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem --> /usr/share/ca-certificates/308915.pem (1338 bytes)
	I1213 11:51:55.083198  604010 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:51:55.097732  604010 ssh_runner.go:195] Run: openssl version
	I1213 11:51:55.104458  604010 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3089152.pem
	I1213 11:51:55.112443  604010 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3089152.pem /etc/ssl/certs/3089152.pem
	I1213 11:51:55.120212  604010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3089152.pem
	I1213 11:51:55.124175  604010 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:22 /usr/share/ca-certificates/3089152.pem
	I1213 11:51:55.124296  604010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3089152.pem
	I1213 11:51:55.166612  604010 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:51:55.174931  604010 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:51:55.182763  604010 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:51:55.190655  604010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:51:55.194550  604010 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:51:55.194637  604010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:51:55.235820  604010 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:51:55.243647  604010 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/308915.pem
	I1213 11:51:55.251252  604010 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/308915.pem /etc/ssl/certs/308915.pem
	I1213 11:51:55.258979  604010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/308915.pem
	I1213 11:51:55.263040  604010 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:22 /usr/share/ca-certificates/308915.pem
	I1213 11:51:55.263115  604010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308915.pem
	I1213 11:51:55.305815  604010 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
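Each cert above is installed by the same three-step dance: copy it into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 to it (which the `sudo test -L` lines verify). A sketch of the hash-and-link step, reusing the exact openssl invocation from the log:

    package main

    import (
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash creates /etc/ssl/certs/<hash>.0 -> cert, the
    // layout the `sudo test -L` checks above are verifying.
    func linkBySubjectHash(cert string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	os.Remove(link) // ignore error; emulate `ln -fs`
    	return os.Symlink(cert, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		panic(err)
    	}
    }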
	I1213 11:51:55.313358  604010 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:51:55.317228  604010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 11:51:55.358360  604010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 11:51:55.399354  604010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 11:51:55.440616  604010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 11:51:55.481788  604010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 11:51:55.527783  604010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
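`openssl x509 -checkend 86400` above exits non-zero when a certificate expires within the next 24 hours, which is how minikube decides whether the existing control-plane certs are still usable. The equivalent check in pure Go, assuming a PEM-encoded certificate file:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in a PEM file
    // expires within d, i.e. what `openssl x509 -checkend` tests.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }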
	I1213 11:51:55.570548  604010 kubeadm.go:401] StartCluster: {Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:51:55.570648  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 11:51:55.570740  604010 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:51:55.597807  604010 cri.go:89] found id: ""
	I1213 11:51:55.597910  604010 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:51:55.605830  604010 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 11:51:55.605851  604010 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 11:51:55.605907  604010 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 11:51:55.613526  604010 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 11:51:55.614085  604010 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-796924" does not appear in /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 11:51:55.614332  604010 kubeconfig.go:62] /home/jenkins/minikube-integration/22127-307042/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-796924" cluster setting kubeconfig missing "newest-cni-796924" context setting]
	I1213 11:51:55.614935  604010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/kubeconfig: {Name:mk9039d1d506122ff52224469c478e739c5ebabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:51:55.617326  604010 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 11:51:55.625376  604010 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1213 11:51:55.625455  604010 kubeadm.go:602] duration metric: took 19.59756ms to restartPrimaryControlPlane
	I1213 11:51:55.625473  604010 kubeadm.go:403] duration metric: took 54.935084ms to StartCluster
	I1213 11:51:55.625491  604010 settings.go:142] acquiring lock: {Name:mk079e9a25ebbc2c8fbae42d4c6ed096a652c00b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:51:55.625565  604010 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 11:51:55.626520  604010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/kubeconfig: {Name:mk9039d1d506122ff52224469c478e739c5ebabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:51:55.626793  604010 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 11:51:55.627185  604010 config.go:182] Loaded profile config "newest-cni-796924": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:51:55.627271  604010 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 11:51:55.627363  604010 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-796924"
	I1213 11:51:55.627383  604010 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-796924"
	I1213 11:51:55.627413  604010 host.go:66] Checking if "newest-cni-796924" exists ...
	I1213 11:51:55.627434  604010 addons.go:70] Setting dashboard=true in profile "newest-cni-796924"
	I1213 11:51:55.627450  604010 addons.go:239] Setting addon dashboard=true in "newest-cni-796924"
	W1213 11:51:55.627456  604010 addons.go:248] addon dashboard should already be in state true
	I1213 11:51:55.627477  604010 host.go:66] Checking if "newest-cni-796924" exists ...
	I1213 11:51:55.627878  604010 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:51:55.628091  604010 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:51:55.628783  604010 addons.go:70] Setting default-storageclass=true in profile "newest-cni-796924"
	I1213 11:51:55.628812  604010 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-796924"
	I1213 11:51:55.629112  604010 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:51:55.631079  604010 out.go:179] * Verifying Kubernetes components...
	I1213 11:51:55.634139  604010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:51:55.667375  604010 addons.go:239] Setting addon default-storageclass=true in "newest-cni-796924"
	I1213 11:51:55.667423  604010 host.go:66] Checking if "newest-cni-796924" exists ...
	I1213 11:51:55.667842  604010 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:51:55.688084  604010 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:51:55.691677  604010 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:51:55.691701  604010 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 11:51:55.691785  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:55.697906  604010 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 11:51:55.697933  604010 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 11:51:55.698005  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:55.704903  604010 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 11:51:55.707765  604010 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 11:51:55.710658  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 11:51:55.710701  604010 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 11:51:55.710771  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:55.754330  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:55.772597  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:55.773144  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:55.866635  604010 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:51:55.926205  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 11:51:55.934055  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:51:55.957399  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 11:51:55.957444  604010 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 11:51:55.971225  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 11:51:55.971291  604010 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 11:51:56.007402  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 11:51:56.007444  604010 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 11:51:56.023097  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 11:51:56.023122  604010 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 11:51:56.039306  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 11:51:56.039347  604010 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 11:51:56.054865  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 11:51:56.054892  604010 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 11:51:56.069056  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 11:51:56.069097  604010 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 11:51:56.083856  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 11:51:56.083885  604010 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 11:51:56.097577  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 11:51:56.097600  604010 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 11:51:56.111351  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
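The applies that follow fail while the apiserver is still coming up, and each failure is retried after a short randomized delay (the retry.go:31 lines below). A generic sketch of that retry-with-backoff shape; the growth and jitter constants here are illustrative, not minikube's:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retry runs fn up to attempts times, sleeping a jittered, growing
    // delay between failures, similar to the retry.go messages below.
    func retry(attempts int, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		delay := time.Duration(100+rand.Intn(300)) * time.Millisecond << uint(i)
    		fmt.Printf("will retry after %v: %v\n", delay, err)
    		time.Sleep(delay)
    	}
    	return err
    }

    func main() {
    	i := 0
    	err := retry(5, func() error {
    		i++
    		if i < 3 {
    			return fmt.Errorf("connection refused (attempt %d)", i)
    		}
    		return nil
    	})
    	fmt.Println("final:", err)
    }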
	I1213 11:51:56.663977  604010 api_server.go:52] waiting for apiserver process to appear ...
	W1213 11:51:56.664058  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.664121  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:51:56.664172  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.664188  604010 retry.go:31] will retry after 289.236479ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.664122  604010 retry.go:31] will retry after 183.877549ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:51:56.664453  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.664469  604010 retry.go:31] will retry after 218.899341ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
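The retry.go:31 lines through this section show minikube re-running each failed kubectl apply after a short, growing interval (roughly 184ms, 219ms, and 241ms at first, stretching past 1.5s further down). A minimal sketch of that retry-with-backoff pattern, assuming a simple doubling delay; the helper name, intervals, and attempt cap are illustrative, not minikube's actual implementation:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry runs `kubectl apply --force -f manifest` and, on failure,
// sleeps a growing delay before trying again, the same shape as the
// "will retry after ..." lines above. All values are illustrative.
func applyWithRetry(manifest string, maxAttempts int) error {
	delay := 200 * time.Millisecond
	var lastErr error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("attempt %d: %v\n%s", attempt, err, out)
		time.Sleep(delay)
		delay *= 2 // back off, as the log's growing intervals suggest
	}
	return lastErr
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
		fmt.Println("giving up:", err)
	}
}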
	I1213 11:51:56.849187  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 11:51:56.883801  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:51:56.926668  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.926802  604010 retry.go:31] will retry after 241.089101ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1213 11:51:56.953849  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:51:56.985603  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.985688  604010 retry.go:31] will retry after 237.809149ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1213 11:51:57.026263  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:57.026297  604010 retry.go:31] will retry after 349.427803ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1213 11:51:57.164593  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:51:57.169067  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 11:51:57.224678  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:51:57.234523  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:57.234624  604010 retry.go:31] will retry after 787.051236ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1213 11:51:57.297371  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:57.297440  604010 retry.go:31] will retry after 317.469921ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1213 11:51:57.376456  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:51:57.452615  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:57.452649  604010 retry.go:31] will retry after 679.978714ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1213 11:51:57.616149  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 11:51:57.664727  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:51:57.701776  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:57.701820  604010 retry.go:31] will retry after 682.458958ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1213 11:51:58.022897  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:51:58.088105  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.088141  604010 retry.go:31] will retry after 475.463602ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1213 11:51:58.133516  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:51:58.165032  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:51:58.230626  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.230659  604010 retry.go:31] will retry after 634.421741ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1213 11:51:58.385149  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:51:58.461368  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.461471  604010 retry.go:31] will retry after 859.118132ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1213 11:51:58.564227  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:51:58.633858  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.633891  604010 retry.go:31] will retry after 1.632863719s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1213 11:51:58.665061  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:51:58.866071  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:51:58.936827  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.936859  604010 retry.go:31] will retry after 1.533813591s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1213 11:51:59.165263  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:51:59.321822  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:51:59.385607  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:59.385640  604010 retry.go:31] will retry after 2.101781304s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1213 11:51:59.665231  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:00.164312  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
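The repeated sudo pgrep -xnf kube-apiserver.*minikube.* probes above show minikube polling for the apiserver process roughly twice a second while the applies keep failing. An equivalent readiness check against the API endpoint itself could look like the sketch below; /readyz is a standard apiserver endpoint, but the interval, timeout, and skipped TLS verification are assumptions for illustration, not minikube's code:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForAPIServer polls the apiserver's /readyz endpoint until it answers
// 200 OK or the deadline passes. TLS verification is skipped only because a
// local apiserver serves a self-signed certificate; this is a probe, not a
// client for real traffic.
func waitForAPIServer(base string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(base + "/readyz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not ready within %s", base, timeout)
}

func main() {
	if err := waitForAPIServer("https://localhost:8443", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}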
	I1213 11:52:00.267962  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 11:52:00.471799  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:52:00.516223  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:00.516306  604010 retry.go:31] will retry after 1.542990826s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:52:00.569718  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:00.569762  604010 retry.go:31] will retry after 1.699392085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
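Three appliers are retrying independently here (storageclass.yaml, storage-provisioner.yaml, and the ten dashboard manifests as a single batch), each on its own backoff schedule, which is why their attempts interleave with the half-second kube-apiserver polls for the rest of this log.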
	I1213 11:52:00.664868  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:01.165071  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:01.487701  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:52:01.556576  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:01.556610  604010 retry.go:31] will retry after 1.79578881s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:01.665032  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:02.059588  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:52:02.123368  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:02.123421  604010 retry.go:31] will retry after 4.212258745s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
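The "will retry after" delays (1.54s and 1.70s at first, 13.89s and 11.53s by the end of this excerpt) grow roughly exponentially with random jitter. A self-contained sketch of that pattern, assuming a generic helper rather than minikube's actual retry package:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryExpo retries fn with exponentially growing, jittered delays until it
// succeeds or the total elapsed time exceeds maxElapsed. Hypothetical helper,
// not minikube's retry implementation.
func retryExpo(fn func() error, initial, maxElapsed time.Duration) error {
	start := time.Now()
	delay := initial
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > maxElapsed {
			return fmt.Errorf("giving up after %s: %w", time.Since(start), err)
		}
		// Jitter: scale the base delay by a random factor in [0.5, 1.5).
		jittered := time.Duration(float64(delay) * (0.5 + rand.Float64()))
		fmt.Printf("will retry after %s: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 2
	}
}

func main() {
	attempts := 0
	err := retryExpo(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("connect: connection refused")
		}
		return nil
	}, 500*time.Millisecond, 30*time.Second)
	fmt.Println("result:", err, "after", attempts, "attempts")
}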
	I1213 11:52:02.164643  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:02.270065  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:52:02.336655  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:02.336687  604010 retry.go:31] will retry after 2.291652574s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:02.665180  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:03.164491  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:03.353076  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:52:03.415819  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:03.415855  604010 retry.go:31] will retry after 3.520621119s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:03.664666  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:04.164990  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:04.629361  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:52:04.665164  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:04.695856  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:04.695887  604010 retry.go:31] will retry after 5.092647079s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:05.164583  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:05.665005  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:06.164298  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:06.336728  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:52:06.399256  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:06.399289  604010 retry.go:31] will retry after 2.548236052s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:06.664733  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:06.937128  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:52:07.007320  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:07.007359  604010 retry.go:31] will retry after 3.279734506s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:07.164482  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:07.664186  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:08.164259  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:08.664905  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:08.947682  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:52:09.039225  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:09.039255  604010 retry.go:31] will retry after 6.163469341s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
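Each attempt shells out to the version-pinned kubectl under /var/lib/minikube/binaries with KUBECONFIG set for just that command. A rough Go equivalent of one such invocation (paths copied from the log; the wrapper itself is illustrative, and minikube actually runs this over SSH inside the node):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// sudo accepts VAR=value assignments before the command, which is how
	// KUBECONFIG is threaded through in the log lines above.
	cmd := exec.Command(
		"sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml",
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		// A non-zero kubectl exit (here: validation failing because the
		// apiserver is down) surfaces as "Process exited with status 1".
		fmt.Println("apply failed:", err)
	}
}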
	I1213 11:52:09.164651  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:09.664239  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:09.789499  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:52:09.850576  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:09.850610  604010 retry.go:31] will retry after 3.796434626s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:10.165090  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:10.288047  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:52:10.355227  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:10.355265  604010 retry.go:31] will retry after 7.010948619s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:10.664471  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:11.165062  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:11.664272  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:12.164932  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:12.664657  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:13.164305  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
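In parallel with the addon retries, minikube polls for a running apiserver process roughly every 500ms with sudo pgrep -xnf kube-apiserver.*minikube.*, as the repeated ssh_runner lines show. A small sketch of such a poll loop (generic, assuming pgrep is on PATH; minikube runs the same check over SSH inside the node):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a kube-apiserver process matching the
// pattern exists or the deadline passes. Sketch only.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// -f: match against the full command line, -x: require the pattern
		// to match it exactly, -n: pick the newest matching process.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil && len(out) > 0 {
			// pgrep exits non-zero when nothing matches, so reaching here
			// means a matching process was found.
			fmt.Printf("apiserver up, pid %s", out)
			return nil
		}
		time.Sleep(500 * time.Millisecond) // matches the ~2 polls/second in the log
	}
	return fmt.Errorf("no kube-apiserver process after %s", timeout)
}

func main() {
	if err := waitForAPIServer(30 * time.Second); err != nil {
		fmt.Println(err)
	}
}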
	I1213 11:52:13.647328  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:52:13.664818  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:13.719910  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:13.719942  604010 retry.go:31] will retry after 9.330768854s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:14.164344  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:14.664306  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:15.164242  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:15.203030  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:52:15.263577  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:15.263607  604010 retry.go:31] will retry after 8.190073233s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:15.664266  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:16.165207  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:16.664293  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:17.164467  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:17.367027  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:52:17.430899  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:17.430934  604010 retry.go:31] will retry after 13.887712507s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:17.664357  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:18.164881  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:18.664960  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:19.164308  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:19.665208  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:20.165105  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:20.664287  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:21.164362  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:21.664274  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:22.164288  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:22.665206  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
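The half-second cadence above is minikube polling for the apiserver process itself rather than the API endpoint. The pgrep flags do the following: -f matches against the full command line, -x requires the pattern to match that command line exactly, and -n reports only the newest matching process. The equivalent manual check:

    # exits non-zero while no kube-apiserver process exists on the node
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'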
	I1213 11:52:23.051577  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:52:23.111902  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:23.111935  604010 retry.go:31] will retry after 11.527342508s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:23.165176  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:23.453917  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:52:23.521291  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:23.521324  604010 retry.go:31] will retry after 14.842315117s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
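The retry delays logged by retry.go (13.9s, 11.5s, 14.8s, and growing later in the log) look like jittered backoff rather than a fixed interval. A minimal bash sketch of that pattern, purely illustrative and not minikube's actual retry.go implementation (addon.yaml is a stand-in path):

    delay=10
    for attempt in 1 2 3 4 5; do
      kubectl apply --force -f addon.yaml && break
      sleep $(( delay + RANDOM % 10 ))   # base delay plus jitter
      delay=$(( delay * 3 / 2 ))         # grow roughly 1.5x per attempt
    done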
	I1213 11:52:23.664722  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:24.165113  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:24.664242  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:25.164277  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:25.664353  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:26.164245  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:26.664280  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:27.164344  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:27.664260  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:28.164294  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:28.664213  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:29.165160  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:29.664269  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:30.165128  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:30.664169  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:31.164314  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:31.319227  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:52:31.384220  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:31.384257  604010 retry.go:31] will retry after 14.168397615s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:31.664303  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:32.164990  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:32.664299  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:33.164301  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:33.664641  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:34.164270  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:34.639887  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:52:34.664451  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:34.713642  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:34.713678  604010 retry.go:31] will retry after 21.545330114s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:35.164160  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:35.665036  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:36.164253  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:36.664233  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:37.164426  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:37.664423  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:38.164585  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:38.364338  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:52:38.426452  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:38.426486  604010 retry.go:31] will retry after 16.958085374s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:38.665187  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:39.164590  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:39.665128  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:40.164295  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:40.664289  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:41.164238  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:41.664308  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:42.164562  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:42.664974  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:43.164327  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:43.664236  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:44.164970  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:44.664271  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:45.164423  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:45.553023  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:52:45.614931  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:45.614965  604010 retry.go:31] will retry after 19.954026213s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:45.665141  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:46.164288  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:46.664717  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:47.164232  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:47.664844  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:48.164283  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:48.665063  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:49.164283  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:49.664430  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:50.165168  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:50.665085  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:51.164301  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:51.664309  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:52.165148  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:52.664704  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:53.164339  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:53.664699  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:54.164840  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:54.664218  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:55.165093  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:55.385630  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:52:55.504689  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:55.504722  604010 retry.go:31] will retry after 37.277266145s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:55.664229  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:52:55.664327  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:52:55.694796  604010 cri.go:89] found id: ""
	I1213 11:52:55.694825  604010 logs.go:282] 0 containers: []
	W1213 11:52:55.694835  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:52:55.694843  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:52:55.694903  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:52:55.723663  604010 cri.go:89] found id: ""
	I1213 11:52:55.723688  604010 logs.go:282] 0 containers: []
	W1213 11:52:55.723697  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:52:55.723704  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:52:55.723763  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:52:55.748991  604010 cri.go:89] found id: ""
	I1213 11:52:55.749019  604010 logs.go:282] 0 containers: []
	W1213 11:52:55.749027  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:52:55.749034  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:52:55.749096  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:52:55.774258  604010 cri.go:89] found id: ""
	I1213 11:52:55.774281  604010 logs.go:282] 0 containers: []
	W1213 11:52:55.774290  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:52:55.774297  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:52:55.774355  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:52:55.798762  604010 cri.go:89] found id: ""
	I1213 11:52:55.798788  604010 logs.go:282] 0 containers: []
	W1213 11:52:55.798796  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:52:55.798802  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:52:55.798861  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:52:55.823037  604010 cri.go:89] found id: ""
	I1213 11:52:55.823063  604010 logs.go:282] 0 containers: []
	W1213 11:52:55.823071  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:52:55.823078  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:52:55.823139  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:52:55.847241  604010 cri.go:89] found id: ""
	I1213 11:52:55.847267  604010 logs.go:282] 0 containers: []
	W1213 11:52:55.847276  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:52:55.847283  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:52:55.847343  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:52:55.872394  604010 cri.go:89] found id: ""
	I1213 11:52:55.872464  604010 logs.go:282] 0 containers: []
	W1213 11:52:55.872488  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
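At this point minikube switches from process polling to a CRI sweep, asking containerd (via crictl) for every control-plane container by name; --quiet prints bare container IDs and -a includes exited containers, so an empty result means the component was never created at all, not merely that it crashed. The same sweep can be run by hand on the node:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      printf '%s: %s\n' "$c" "$(sudo crictl ps -a --quiet --name="$c" | wc -l)"
    done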
	I1213 11:52:55.872505  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:52:55.872518  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:52:55.888592  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:52:55.888623  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:52:55.954582  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:52:55.945990    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:55.946863    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:55.948347    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:55.948763    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:55.950227    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:52:55.945990    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:55.946863    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:55.948347    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:55.948763    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:55.950227    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
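The describe-nodes attempt fails one layer earlier than the addon applies: kubectl's discovery cache (memcache.go) cannot even fetch the server's API group list, so the command dies before any node object is requested. Nothing is listening on the port, which can be confirmed directly (an assumed manual check, not taken from this log):

    sudo ss -tlnp | grep ':8443' || echo 'no listener on 8443'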
	I1213 11:52:55.954616  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:52:55.954629  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:52:55.979360  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:52:55.979393  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:52:56.015953  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:52:56.015986  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
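With no containers to inspect, the only diagnostics left are the node-level journals and the kernel ring buffer: journalctl -u selects a systemd unit and -n 400 caps output at the last 400 lines, while the dmesg flags request human-readable (-H), pager-free (-P), colorless (-L=never) output filtered to warnings and worse. Runnable as-is on the node:

    sudo journalctl -u containerd -n 400 --no-pager
    sudo journalctl -u kubelet -n 400 --no-pager
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400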
	I1213 11:52:56.262345  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:52:56.407172  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:56.407203  604010 retry.go:31] will retry after 30.096993011s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:58.574217  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:58.585863  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:52:58.585937  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:52:58.613052  604010 cri.go:89] found id: ""
	I1213 11:52:58.613084  604010 logs.go:282] 0 containers: []
	W1213 11:52:58.613094  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:52:58.613102  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:52:58.613187  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:52:58.639217  604010 cri.go:89] found id: ""
	I1213 11:52:58.639241  604010 logs.go:282] 0 containers: []
	W1213 11:52:58.639250  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:52:58.639256  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:52:58.639323  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:52:58.691503  604010 cri.go:89] found id: ""
	I1213 11:52:58.691529  604010 logs.go:282] 0 containers: []
	W1213 11:52:58.691539  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:52:58.691545  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:52:58.691607  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:52:58.739302  604010 cri.go:89] found id: ""
	I1213 11:52:58.739330  604010 logs.go:282] 0 containers: []
	W1213 11:52:58.739339  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:52:58.739345  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:52:58.739407  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:52:58.768957  604010 cri.go:89] found id: ""
	I1213 11:52:58.768985  604010 logs.go:282] 0 containers: []
	W1213 11:52:58.768994  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:52:58.769001  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:52:58.769114  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:52:58.794144  604010 cri.go:89] found id: ""
	I1213 11:52:58.794172  604010 logs.go:282] 0 containers: []
	W1213 11:52:58.794181  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:52:58.794188  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:52:58.794248  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:52:58.818208  604010 cri.go:89] found id: ""
	I1213 11:52:58.818234  604010 logs.go:282] 0 containers: []
	W1213 11:52:58.818243  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:52:58.818250  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:52:58.818307  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:52:58.841575  604010 cri.go:89] found id: ""
	I1213 11:52:58.841600  604010 logs.go:282] 0 containers: []
	W1213 11:52:58.841613  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:52:58.841622  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:52:58.841636  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:52:58.867434  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:52:58.867469  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:52:58.898944  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:52:58.898974  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:52:58.954613  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:52:58.954649  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:52:58.970766  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:52:58.970842  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:52:59.034290  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:52:59.026403    1983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:59.026973    1983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:59.028473    1983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:59.028883    1983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:59.030363    1983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:52:59.026403    1983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:59.026973    1983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:59.028473    1983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:59.028883    1983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:59.030363    1983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:53:01.534586  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:01.545484  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:01.545555  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:01.572215  604010 cri.go:89] found id: ""
	I1213 11:53:01.572288  604010 logs.go:282] 0 containers: []
	W1213 11:53:01.572302  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:01.572310  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:01.572388  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:01.598159  604010 cri.go:89] found id: ""
	I1213 11:53:01.598188  604010 logs.go:282] 0 containers: []
	W1213 11:53:01.598196  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:01.598203  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:01.598300  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:01.623153  604010 cri.go:89] found id: ""
	I1213 11:53:01.623177  604010 logs.go:282] 0 containers: []
	W1213 11:53:01.623186  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:01.623195  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:01.623261  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:01.649622  604010 cri.go:89] found id: ""
	I1213 11:53:01.649644  604010 logs.go:282] 0 containers: []
	W1213 11:53:01.649652  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:01.649659  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:01.649737  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:01.683094  604010 cri.go:89] found id: ""
	I1213 11:53:01.683119  604010 logs.go:282] 0 containers: []
	W1213 11:53:01.683127  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:01.683133  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:01.683194  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:01.713141  604010 cri.go:89] found id: ""
	I1213 11:53:01.713209  604010 logs.go:282] 0 containers: []
	W1213 11:53:01.713236  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:01.713255  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:01.713329  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:01.743530  604010 cri.go:89] found id: ""
	I1213 11:53:01.743598  604010 logs.go:282] 0 containers: []
	W1213 11:53:01.743644  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:01.743659  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:01.743724  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:01.768540  604010 cri.go:89] found id: ""
	I1213 11:53:01.768567  604010 logs.go:282] 0 containers: []
	W1213 11:53:01.768575  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:01.768585  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:01.768596  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:01.793626  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:01.793664  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:01.820553  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:01.820583  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:01.876734  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:01.876770  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:01.893351  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:01.893425  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:01.982105  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:01.970876    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:01.971602    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:01.973230    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:01.973588    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:01.977591    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:53:01.970876    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:01.971602    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:01.973230    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:01.973588    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:01.977591    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
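From here the log settles into a fixed roughly 3-second cycle: a pgrep for the apiserver process, a full crictl sweep, the journal/dmesg gathers, and a failing describe nodes, with no state change between iterations. When reproducing interactively it is easier to watch for recovery than to re-read the loop (illustrative command):

    watch -n 3 'sudo crictl ps -a --name kube-apiserver'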
	I1213 11:53:04.482731  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:04.495226  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:04.495299  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:04.521792  604010 cri.go:89] found id: ""
	I1213 11:53:04.521819  604010 logs.go:282] 0 containers: []
	W1213 11:53:04.521829  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:04.521836  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:04.521900  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:04.553223  604010 cri.go:89] found id: ""
	I1213 11:53:04.553249  604010 logs.go:282] 0 containers: []
	W1213 11:53:04.553258  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:04.553264  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:04.553333  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:04.580024  604010 cri.go:89] found id: ""
	I1213 11:53:04.580049  604010 logs.go:282] 0 containers: []
	W1213 11:53:04.580058  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:04.580064  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:04.580123  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:04.622013  604010 cri.go:89] found id: ""
	I1213 11:53:04.622041  604010 logs.go:282] 0 containers: []
	W1213 11:53:04.622050  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:04.622057  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:04.622117  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:04.646212  604010 cri.go:89] found id: ""
	I1213 11:53:04.646236  604010 logs.go:282] 0 containers: []
	W1213 11:53:04.646245  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:04.646251  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:04.646312  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:04.682129  604010 cri.go:89] found id: ""
	I1213 11:53:04.682156  604010 logs.go:282] 0 containers: []
	W1213 11:53:04.682165  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:04.682171  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:04.682288  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:04.710645  604010 cri.go:89] found id: ""
	I1213 11:53:04.710675  604010 logs.go:282] 0 containers: []
	W1213 11:53:04.710706  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:04.710714  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:04.710781  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:04.742882  604010 cri.go:89] found id: ""
	I1213 11:53:04.742906  604010 logs.go:282] 0 containers: []
	W1213 11:53:04.742915  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:04.742926  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:04.742938  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:04.799010  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:04.799046  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:04.814626  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:04.814655  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:04.884663  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:04.876082    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:04.876754    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:04.878443    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:04.878819    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:04.880048    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:04.884686  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:04.884717  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:04.910422  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:04.910589  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:05.570211  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:53:05.631760  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:53:05.631794  604010 retry.go:31] will retry after 44.542402529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1213 11:53:07.442499  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:07.453537  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:07.453615  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:07.482132  604010 cri.go:89] found id: ""
	I1213 11:53:07.482155  604010 logs.go:282] 0 containers: []
	W1213 11:53:07.482163  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:07.482170  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:07.482229  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:07.506787  604010 cri.go:89] found id: ""
	I1213 11:53:07.506813  604010 logs.go:282] 0 containers: []
	W1213 11:53:07.506823  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:07.506829  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:07.506890  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:07.532425  604010 cri.go:89] found id: ""
	I1213 11:53:07.532449  604010 logs.go:282] 0 containers: []
	W1213 11:53:07.532458  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:07.532465  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:07.532527  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:07.557042  604010 cri.go:89] found id: ""
	I1213 11:53:07.557071  604010 logs.go:282] 0 containers: []
	W1213 11:53:07.557081  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:07.557087  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:07.557147  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:07.581888  604010 cri.go:89] found id: ""
	I1213 11:53:07.581919  604010 logs.go:282] 0 containers: []
	W1213 11:53:07.581934  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:07.581940  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:07.582000  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:07.605619  604010 cri.go:89] found id: ""
	I1213 11:53:07.605646  604010 logs.go:282] 0 containers: []
	W1213 11:53:07.605655  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:07.605661  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:07.605722  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:07.631481  604010 cri.go:89] found id: ""
	I1213 11:53:07.631503  604010 logs.go:282] 0 containers: []
	W1213 11:53:07.631511  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:07.631517  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:07.631574  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:07.656152  604010 cri.go:89] found id: ""
	I1213 11:53:07.656178  604010 logs.go:282] 0 containers: []
	W1213 11:53:07.656187  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:07.656196  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:07.656207  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:07.738199  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:07.729773    2303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:07.730173    2303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:07.732061    2303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:07.732672    2303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:07.734342    2303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:07.738218  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:07.738230  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:07.763561  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:07.763597  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:07.791032  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:07.791059  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:07.846125  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:07.846160  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:10.362523  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:10.372985  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:10.373056  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:10.397984  604010 cri.go:89] found id: ""
	I1213 11:53:10.398016  604010 logs.go:282] 0 containers: []
	W1213 11:53:10.398037  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:10.398044  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:10.398121  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:10.423159  604010 cri.go:89] found id: ""
	I1213 11:53:10.423189  604010 logs.go:282] 0 containers: []
	W1213 11:53:10.423198  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:10.423204  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:10.423266  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:10.447027  604010 cri.go:89] found id: ""
	I1213 11:53:10.447055  604010 logs.go:282] 0 containers: []
	W1213 11:53:10.447064  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:10.447071  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:10.447131  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:10.472026  604010 cri.go:89] found id: ""
	I1213 11:53:10.472049  604010 logs.go:282] 0 containers: []
	W1213 11:53:10.472057  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:10.472064  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:10.472122  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:10.503263  604010 cri.go:89] found id: ""
	I1213 11:53:10.503326  604010 logs.go:282] 0 containers: []
	W1213 11:53:10.503352  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:10.503366  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:10.503440  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:10.532481  604010 cri.go:89] found id: ""
	I1213 11:53:10.532509  604010 logs.go:282] 0 containers: []
	W1213 11:53:10.532518  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:10.532524  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:10.532587  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:10.557219  604010 cri.go:89] found id: ""
	I1213 11:53:10.557258  604010 logs.go:282] 0 containers: []
	W1213 11:53:10.557266  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:10.557273  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:10.557342  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:10.585410  604010 cri.go:89] found id: ""
	I1213 11:53:10.585499  604010 logs.go:282] 0 containers: []
	W1213 11:53:10.585522  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:10.585547  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:10.585587  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:10.611450  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:10.611488  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:10.639926  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:10.639954  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:10.696844  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:10.696881  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:10.713623  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:10.713657  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:10.777642  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:10.768681    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:10.769607    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:10.771307    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:10.771820    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:10.773703    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:13.278890  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:13.289748  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:13.289817  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:13.317511  604010 cri.go:89] found id: ""
	I1213 11:53:13.317541  604010 logs.go:282] 0 containers: []
	W1213 11:53:13.317550  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:13.317557  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:13.317618  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:13.343404  604010 cri.go:89] found id: ""
	I1213 11:53:13.343432  604010 logs.go:282] 0 containers: []
	W1213 11:53:13.343441  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:13.343448  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:13.343503  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:13.369007  604010 cri.go:89] found id: ""
	I1213 11:53:13.369030  604010 logs.go:282] 0 containers: []
	W1213 11:53:13.369039  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:13.369046  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:13.369108  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:13.395054  604010 cri.go:89] found id: ""
	I1213 11:53:13.395084  604010 logs.go:282] 0 containers: []
	W1213 11:53:13.395094  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:13.395109  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:13.395171  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:13.424003  604010 cri.go:89] found id: ""
	I1213 11:53:13.424030  604010 logs.go:282] 0 containers: []
	W1213 11:53:13.424039  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:13.424046  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:13.424105  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:13.448932  604010 cri.go:89] found id: ""
	I1213 11:53:13.449012  604010 logs.go:282] 0 containers: []
	W1213 11:53:13.449029  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:13.449036  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:13.449112  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:13.474446  604010 cri.go:89] found id: ""
	I1213 11:53:13.474472  604010 logs.go:282] 0 containers: []
	W1213 11:53:13.474481  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:13.474487  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:13.474611  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:13.501117  604010 cri.go:89] found id: ""
	I1213 11:53:13.501141  604010 logs.go:282] 0 containers: []
	W1213 11:53:13.501150  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:13.501159  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:13.501171  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:13.557792  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:13.557829  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:13.574541  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:13.574574  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:13.639676  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:13.629891    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:13.631830    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:13.632611    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:13.634220    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:13.634886    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:13.639700  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:13.639713  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:13.664830  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:13.664911  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:16.204971  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:16.215560  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:16.215635  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:16.240196  604010 cri.go:89] found id: ""
	I1213 11:53:16.240220  604010 logs.go:282] 0 containers: []
	W1213 11:53:16.240229  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:16.240235  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:16.240293  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:16.265455  604010 cri.go:89] found id: ""
	I1213 11:53:16.265487  604010 logs.go:282] 0 containers: []
	W1213 11:53:16.265497  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:16.265504  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:16.265562  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:16.289852  604010 cri.go:89] found id: ""
	I1213 11:53:16.289875  604010 logs.go:282] 0 containers: []
	W1213 11:53:16.289886  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:16.289893  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:16.289954  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:16.315329  604010 cri.go:89] found id: ""
	I1213 11:53:16.315353  604010 logs.go:282] 0 containers: []
	W1213 11:53:16.315362  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:16.315368  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:16.315433  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:16.346811  604010 cri.go:89] found id: ""
	I1213 11:53:16.346835  604010 logs.go:282] 0 containers: []
	W1213 11:53:16.346844  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:16.346856  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:16.346916  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:16.371580  604010 cri.go:89] found id: ""
	I1213 11:53:16.371608  604010 logs.go:282] 0 containers: []
	W1213 11:53:16.371617  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:16.371623  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:16.371759  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:16.397183  604010 cri.go:89] found id: ""
	I1213 11:53:16.397210  604010 logs.go:282] 0 containers: []
	W1213 11:53:16.397219  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:16.397225  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:16.397286  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:16.422782  604010 cri.go:89] found id: ""
	I1213 11:53:16.422810  604010 logs.go:282] 0 containers: []
	W1213 11:53:16.422821  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:16.422831  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:16.422848  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:16.478667  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:16.478714  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:16.494974  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:16.495011  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:16.560810  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:16.552790    2649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:16.553168    2649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:16.554711    2649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:16.555221    2649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:16.556703    2649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:16.560835  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:16.560849  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:16.586263  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:16.586301  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:19.117851  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:19.128831  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:19.128899  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:19.156507  604010 cri.go:89] found id: ""
	I1213 11:53:19.156537  604010 logs.go:282] 0 containers: []
	W1213 11:53:19.156546  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:19.156553  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:19.156619  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:19.184004  604010 cri.go:89] found id: ""
	I1213 11:53:19.184032  604010 logs.go:282] 0 containers: []
	W1213 11:53:19.184041  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:19.184048  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:19.184108  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:19.210447  604010 cri.go:89] found id: ""
	I1213 11:53:19.210475  604010 logs.go:282] 0 containers: []
	W1213 11:53:19.210485  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:19.210491  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:19.210563  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:19.243214  604010 cri.go:89] found id: ""
	I1213 11:53:19.243241  604010 logs.go:282] 0 containers: []
	W1213 11:53:19.243250  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:19.243257  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:19.243317  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:19.267811  604010 cri.go:89] found id: ""
	I1213 11:53:19.267835  604010 logs.go:282] 0 containers: []
	W1213 11:53:19.267845  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:19.267851  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:19.267912  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:19.291841  604010 cri.go:89] found id: ""
	I1213 11:53:19.291863  604010 logs.go:282] 0 containers: []
	W1213 11:53:19.291872  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:19.291878  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:19.291942  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:19.316863  604010 cri.go:89] found id: ""
	I1213 11:53:19.316890  604010 logs.go:282] 0 containers: []
	W1213 11:53:19.316898  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:19.316904  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:19.316963  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:19.341844  604010 cri.go:89] found id: ""
	I1213 11:53:19.341872  604010 logs.go:282] 0 containers: []
	W1213 11:53:19.341881  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:19.341890  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:19.341901  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:19.397829  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:19.397868  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:19.413720  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:19.413749  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:19.481667  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:19.473280    2765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:19.474094    2765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:19.475625    2765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:19.476130    2765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:19.477751    2765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:19.481694  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:19.481706  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:19.507029  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:19.507069  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:22.036187  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:22.047443  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:22.047516  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:22.073399  604010 cri.go:89] found id: ""
	I1213 11:53:22.073425  604010 logs.go:282] 0 containers: []
	W1213 11:53:22.073433  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:22.073440  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:22.073519  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:22.102458  604010 cri.go:89] found id: ""
	I1213 11:53:22.102483  604010 logs.go:282] 0 containers: []
	W1213 11:53:22.102492  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:22.102499  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:22.102564  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:22.127170  604010 cri.go:89] found id: ""
	I1213 11:53:22.127195  604010 logs.go:282] 0 containers: []
	W1213 11:53:22.127203  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:22.127210  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:22.127270  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:22.152852  604010 cri.go:89] found id: ""
	I1213 11:53:22.152879  604010 logs.go:282] 0 containers: []
	W1213 11:53:22.152887  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:22.152894  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:22.152972  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:22.194915  604010 cri.go:89] found id: ""
	I1213 11:53:22.194939  604010 logs.go:282] 0 containers: []
	W1213 11:53:22.194947  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:22.194985  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:22.195074  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:22.228469  604010 cri.go:89] found id: ""
	I1213 11:53:22.228497  604010 logs.go:282] 0 containers: []
	W1213 11:53:22.228507  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:22.228514  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:22.228574  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:22.257833  604010 cri.go:89] found id: ""
	I1213 11:53:22.257908  604010 logs.go:282] 0 containers: []
	W1213 11:53:22.257931  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:22.257949  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:22.258038  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:22.283351  604010 cri.go:89] found id: ""
	I1213 11:53:22.283375  604010 logs.go:282] 0 containers: []
	W1213 11:53:22.283385  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:22.283394  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:22.283425  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:22.339722  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:22.339759  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:22.358616  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:22.358649  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:22.425578  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:22.417365    2881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:22.418082    2881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:22.419768    2881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:22.420247    2881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:22.421786    2881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:53:22.417365    2881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:22.418082    2881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:22.419768    2881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:22.420247    2881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:22.421786    2881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
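[Annotation] Every `describe nodes` attempt above dies with `dial tcp [::1]:8443: connect: connection refused`, which means nothing is listening on the apiserver port at all, rather than the apiserver rejecting the request. A minimal way to confirm that from the host, assuming the node image ships `ss` and `curl` and with `<profile>` as a placeholder for the profile under test:

	# Hypothetical manual check; command names mirror the log, <profile> is a placeholder.
	minikube ssh -p <profile> -- sudo ss -ltn 'sport = :8443'   # expect no listener
	minikube ssh -p <profile> -- curl -sk https://localhost:8443/healthz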
	I1213 11:53:22.425645  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:22.425665  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:22.450867  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:22.450905  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
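[Annotation] This block is one full diagnostic cycle: a `pgrep` probe for a kube-apiserver process, then `crictl ps` queries for each control-plane name (all empty), then log gathering from kubelet, dmesg, `describe nodes`, containerd, and container status. minikube repeats the cycle every few seconds while waiting for the apiserver. A rough sketch of the equivalent wait loop, run inside the node (this is an illustration, not minikube's actual code; the `pgrep` invocation is the one from the log):

	# Poll until a kube-apiserver process appears, up to a deadline.
	deadline=$((SECONDS + 300))
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  [ "$SECONDS" -ge "$deadline" ] && { echo 'apiserver never started'; exit 1; }
	  sleep 3
	done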
	I1213 11:53:24.977642  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:24.988556  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:24.988625  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:25.016189  604010 cri.go:89] found id: ""
	I1213 11:53:25.016224  604010 logs.go:282] 0 containers: []
	W1213 11:53:25.016247  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:25.016255  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:25.016320  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:25.044535  604010 cri.go:89] found id: ""
	I1213 11:53:25.044558  604010 logs.go:282] 0 containers: []
	W1213 11:53:25.044567  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:25.044573  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:25.044632  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:25.070715  604010 cri.go:89] found id: ""
	I1213 11:53:25.070743  604010 logs.go:282] 0 containers: []
	W1213 11:53:25.070752  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:25.070759  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:25.070822  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:25.096936  604010 cri.go:89] found id: ""
	I1213 11:53:25.096959  604010 logs.go:282] 0 containers: []
	W1213 11:53:25.096967  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:25.096974  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:25.097035  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:25.122437  604010 cri.go:89] found id: ""
	I1213 11:53:25.122470  604010 logs.go:282] 0 containers: []
	W1213 11:53:25.122480  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:25.122486  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:25.122584  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:25.148962  604010 cri.go:89] found id: ""
	I1213 11:53:25.148988  604010 logs.go:282] 0 containers: []
	W1213 11:53:25.148997  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:25.149003  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:25.149074  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:25.181633  604010 cri.go:89] found id: ""
	I1213 11:53:25.181655  604010 logs.go:282] 0 containers: []
	W1213 11:53:25.181664  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:25.181670  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:25.181732  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:25.212760  604010 cri.go:89] found id: ""
	I1213 11:53:25.212782  604010 logs.go:282] 0 containers: []
	W1213 11:53:25.212790  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:25.212799  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:25.212811  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:25.276581  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:25.268697    2988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:25.269118    2988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:25.270651    2988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:25.271026    2988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:25.272496    2988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:53:25.268697    2988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:25.269118    2988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:25.270651    2988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:25.271026    2988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:25.272496    2988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:53:25.276603  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:25.276616  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:25.302726  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:25.302763  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:25.334110  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:25.334183  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:25.390064  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:25.390100  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:26.504848  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:53:26.566930  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:53:26.567035  604010 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
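[Annotation] The storage-provisioner failure above is a downstream symptom, not a separate bug: kubectl cannot download the OpenAPI schema because the apiserver is unreachable. The error's suggestion of `--validate=false` would only skip the schema fetch; the apply itself still needs a live apiserver, so the same command (paths exactly as in the log) fails either way until the control plane is up:

	# Skipping validation does not help while localhost:8443 refuses connections.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --validate=false \
	  --force -f /etc/kubernetes/addons/storage-provisioner.yaml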
	I1213 11:53:27.907342  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:27.919244  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:27.919322  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:27.953618  604010 cri.go:89] found id: ""
	I1213 11:53:27.953646  604010 logs.go:282] 0 containers: []
	W1213 11:53:27.953656  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:27.953662  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:27.953732  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:27.983451  604010 cri.go:89] found id: ""
	I1213 11:53:27.983474  604010 logs.go:282] 0 containers: []
	W1213 11:53:27.983483  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:27.983494  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:27.983553  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:28.015089  604010 cri.go:89] found id: ""
	I1213 11:53:28.015124  604010 logs.go:282] 0 containers: []
	W1213 11:53:28.015133  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:28.015141  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:28.015206  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:28.040741  604010 cri.go:89] found id: ""
	I1213 11:53:28.040764  604010 logs.go:282] 0 containers: []
	W1213 11:53:28.040773  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:28.040780  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:28.040847  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:28.066994  604010 cri.go:89] found id: ""
	I1213 11:53:28.067023  604010 logs.go:282] 0 containers: []
	W1213 11:53:28.067032  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:28.067039  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:28.067100  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:28.096788  604010 cri.go:89] found id: ""
	I1213 11:53:28.096819  604010 logs.go:282] 0 containers: []
	W1213 11:53:28.096828  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:28.096835  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:28.096896  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:28.124766  604010 cri.go:89] found id: ""
	I1213 11:53:28.124789  604010 logs.go:282] 0 containers: []
	W1213 11:53:28.124798  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:28.124804  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:28.124873  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:28.159549  604010 cri.go:89] found id: ""
	I1213 11:53:28.159577  604010 logs.go:282] 0 containers: []
	W1213 11:53:28.159585  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:28.159594  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:28.159606  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:28.199573  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:28.199603  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:28.270740  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:28.270789  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:28.287502  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:28.287532  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:28.351364  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:28.343352    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:28.343924    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:28.345385    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:28.345783    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:28.347266    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:53:28.343352    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:28.343924    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:28.345385    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:28.345783    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:28.347266    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:53:28.351388  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:28.351401  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:30.876922  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:30.887774  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:30.887849  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:30.923850  604010 cri.go:89] found id: ""
	I1213 11:53:30.923878  604010 logs.go:282] 0 containers: []
	W1213 11:53:30.923887  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:30.923893  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:30.923952  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:30.951470  604010 cri.go:89] found id: ""
	I1213 11:53:30.951498  604010 logs.go:282] 0 containers: []
	W1213 11:53:30.951507  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:30.951513  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:30.951570  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:30.984618  604010 cri.go:89] found id: ""
	I1213 11:53:30.984644  604010 logs.go:282] 0 containers: []
	W1213 11:53:30.984653  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:30.984659  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:30.984718  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:31.013958  604010 cri.go:89] found id: ""
	I1213 11:53:31.013986  604010 logs.go:282] 0 containers: []
	W1213 11:53:31.013994  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:31.014001  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:31.014062  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:31.039624  604010 cri.go:89] found id: ""
	I1213 11:53:31.039651  604010 logs.go:282] 0 containers: []
	W1213 11:53:31.039661  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:31.039668  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:31.039735  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:31.065442  604010 cri.go:89] found id: ""
	I1213 11:53:31.065471  604010 logs.go:282] 0 containers: []
	W1213 11:53:31.065480  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:31.065526  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:31.065591  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:31.093987  604010 cri.go:89] found id: ""
	I1213 11:53:31.094012  604010 logs.go:282] 0 containers: []
	W1213 11:53:31.094022  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:31.094028  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:31.094092  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:31.120512  604010 cri.go:89] found id: ""
	I1213 11:53:31.120536  604010 logs.go:282] 0 containers: []
	W1213 11:53:31.120545  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:31.120555  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:31.120568  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:31.193061  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:31.184276    3220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:31.185271    3220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:31.187099    3220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:31.187409    3220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:31.188923    3220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:53:31.184276    3220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:31.185271    3220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:31.187099    3220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:31.187409    3220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:31.188923    3220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:53:31.193086  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:31.193099  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:31.222013  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:31.222046  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:31.251352  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:31.251380  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:31.307515  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:31.307558  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
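[Annotation] By this point several cycles have produced identical empty `crictl` listings, so the interesting evidence is in the journals minikube is already collecting. To inspect the kubelet journal directly rather than through the wait loop (same command the log runs, `<profile>` again a placeholder):

	# The kubelet journal is usually where the real startup failure first appears.
	minikube ssh -p <profile> -- sudo journalctl -u kubelet -n 400 --no-pager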
	I1213 11:53:32.782865  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:53:32.843769  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:53:32.843886  604010 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
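[Annotation] Like the storage-provisioner attempt earlier, this default-storageclass failure is registered as a retryable callback ("apply failed, will retry" at addons.go:477), so both addons would come up on a later retry once the apiserver is reachable. Equivalent manual retry from the host after recovery, using the real minikube subcommand with `<profile>` as a placeholder:

	minikube addons enable default-storageclass -p <profile>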
	I1213 11:53:33.825081  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:33.836405  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:33.836483  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:33.862074  604010 cri.go:89] found id: ""
	I1213 11:53:33.862097  604010 logs.go:282] 0 containers: []
	W1213 11:53:33.862108  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:33.862114  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:33.862174  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:33.887847  604010 cri.go:89] found id: ""
	I1213 11:53:33.887872  604010 logs.go:282] 0 containers: []
	W1213 11:53:33.887881  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:33.887888  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:33.887953  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:33.922816  604010 cri.go:89] found id: ""
	I1213 11:53:33.922839  604010 logs.go:282] 0 containers: []
	W1213 11:53:33.922847  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:33.922854  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:33.922912  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:33.956255  604010 cri.go:89] found id: ""
	I1213 11:53:33.956278  604010 logs.go:282] 0 containers: []
	W1213 11:53:33.956286  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:33.956296  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:33.956357  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:33.988633  604010 cri.go:89] found id: ""
	I1213 11:53:33.988660  604010 logs.go:282] 0 containers: []
	W1213 11:53:33.988668  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:33.988675  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:33.988734  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:34.016574  604010 cri.go:89] found id: ""
	I1213 11:53:34.016600  604010 logs.go:282] 0 containers: []
	W1213 11:53:34.016610  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:34.016618  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:34.016688  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:34.047246  604010 cri.go:89] found id: ""
	I1213 11:53:34.047274  604010 logs.go:282] 0 containers: []
	W1213 11:53:34.047283  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:34.047290  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:34.047351  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:34.073767  604010 cri.go:89] found id: ""
	I1213 11:53:34.073791  604010 logs.go:282] 0 containers: []
	W1213 11:53:34.073801  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:34.073810  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:34.073821  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:34.142086  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:34.142126  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:34.160135  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:34.160221  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:34.242780  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:34.234520    3342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:34.235116    3342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:34.236649    3342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:34.237063    3342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:34.238589    3342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:53:34.234520    3342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:34.235116    3342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:34.236649    3342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:34.237063    3342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:34.238589    3342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:53:34.242803  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:34.242817  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:34.268944  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:34.268981  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:36.800525  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:36.813555  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:36.813631  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:36.838503  604010 cri.go:89] found id: ""
	I1213 11:53:36.838530  604010 logs.go:282] 0 containers: []
	W1213 11:53:36.838539  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:36.838546  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:36.838610  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:36.863532  604010 cri.go:89] found id: ""
	I1213 11:53:36.863553  604010 logs.go:282] 0 containers: []
	W1213 11:53:36.863562  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:36.863569  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:36.863629  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:36.888886  604010 cri.go:89] found id: ""
	I1213 11:53:36.888912  604010 logs.go:282] 0 containers: []
	W1213 11:53:36.888920  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:36.888926  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:36.888992  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:36.917481  604010 cri.go:89] found id: ""
	I1213 11:53:36.917566  604010 logs.go:282] 0 containers: []
	W1213 11:53:36.917589  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:36.917608  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:36.917708  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:36.951605  604010 cri.go:89] found id: ""
	I1213 11:53:36.951676  604010 logs.go:282] 0 containers: []
	W1213 11:53:36.951698  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:36.951716  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:36.951808  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:36.980776  604010 cri.go:89] found id: ""
	I1213 11:53:36.980798  604010 logs.go:282] 0 containers: []
	W1213 11:53:36.980807  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:36.980814  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:36.980878  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:37.014102  604010 cri.go:89] found id: ""
	I1213 11:53:37.014129  604010 logs.go:282] 0 containers: []
	W1213 11:53:37.014139  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:37.014146  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:37.014218  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:37.041045  604010 cri.go:89] found id: ""
	I1213 11:53:37.041068  604010 logs.go:282] 0 containers: []
	W1213 11:53:37.041076  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:37.041086  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:37.041099  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:37.057607  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:37.057677  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:37.123513  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:37.114613    3451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:37.115389    3451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:37.117143    3451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:37.117811    3451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:37.119588    3451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:53:37.114613    3451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:37.115389    3451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:37.117143    3451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:37.117811    3451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:37.119588    3451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:53:37.123585  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:37.123612  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:37.149745  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:37.149782  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:37.190123  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:37.190160  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:39.753400  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:39.766329  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:39.766428  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:39.794895  604010 cri.go:89] found id: ""
	I1213 11:53:39.794979  604010 logs.go:282] 0 containers: []
	W1213 11:53:39.794995  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:39.795003  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:39.795077  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:39.819418  604010 cri.go:89] found id: ""
	I1213 11:53:39.819444  604010 logs.go:282] 0 containers: []
	W1213 11:53:39.819453  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:39.819462  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:39.819522  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:39.847949  604010 cri.go:89] found id: ""
	I1213 11:53:39.847976  604010 logs.go:282] 0 containers: []
	W1213 11:53:39.847985  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:39.847992  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:39.848064  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:39.872978  604010 cri.go:89] found id: ""
	I1213 11:53:39.873009  604010 logs.go:282] 0 containers: []
	W1213 11:53:39.873018  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:39.873025  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:39.873091  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:39.900210  604010 cri.go:89] found id: ""
	I1213 11:53:39.900236  604010 logs.go:282] 0 containers: []
	W1213 11:53:39.900245  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:39.900252  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:39.900311  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:39.934251  604010 cri.go:89] found id: ""
	I1213 11:53:39.934276  604010 logs.go:282] 0 containers: []
	W1213 11:53:39.934285  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:39.934291  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:39.934351  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:39.964389  604010 cri.go:89] found id: ""
	I1213 11:53:39.964416  604010 logs.go:282] 0 containers: []
	W1213 11:53:39.964425  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:39.964431  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:39.964496  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:39.995412  604010 cri.go:89] found id: ""
	I1213 11:53:39.995435  604010 logs.go:282] 0 containers: []
	W1213 11:53:39.995444  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:39.995454  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:39.995466  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:40.074600  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:40.074644  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:40.093065  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:40.093143  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:40.162566  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:40.153392    3563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:40.154048    3563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:40.155849    3563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:40.156585    3563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:40.158356    3563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:53:40.153392    3563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:40.154048    3563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:40.155849    3563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:40.156585    3563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:40.158356    3563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:53:40.162633  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:40.162659  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:40.191469  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:40.191548  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:42.738325  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:42.749369  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:42.749435  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:42.776660  604010 cri.go:89] found id: ""
	I1213 11:53:42.776686  604010 logs.go:282] 0 containers: []
	W1213 11:53:42.776695  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:42.776701  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:42.776761  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:42.802014  604010 cri.go:89] found id: ""
	I1213 11:53:42.802042  604010 logs.go:282] 0 containers: []
	W1213 11:53:42.802051  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:42.802057  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:42.802116  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:42.826554  604010 cri.go:89] found id: ""
	I1213 11:53:42.826583  604010 logs.go:282] 0 containers: []
	W1213 11:53:42.826592  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:42.826598  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:42.826659  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:42.853269  604010 cri.go:89] found id: ""
	I1213 11:53:42.853296  604010 logs.go:282] 0 containers: []
	W1213 11:53:42.853305  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:42.853319  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:42.853384  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:42.880122  604010 cri.go:89] found id: ""
	I1213 11:53:42.880150  604010 logs.go:282] 0 containers: []
	W1213 11:53:42.880159  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:42.880166  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:42.880227  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:42.904811  604010 cri.go:89] found id: ""
	I1213 11:53:42.904834  604010 logs.go:282] 0 containers: []
	W1213 11:53:42.904843  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:42.904850  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:42.904908  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:42.930715  604010 cri.go:89] found id: ""
	I1213 11:53:42.930744  604010 logs.go:282] 0 containers: []
	W1213 11:53:42.930753  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:42.930759  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:42.930815  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:42.964092  604010 cri.go:89] found id: ""
	I1213 11:53:42.964115  604010 logs.go:282] 0 containers: []
	W1213 11:53:42.964123  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:42.964132  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:42.964144  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:42.994219  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:42.994254  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:43.031007  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:43.031036  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:43.086377  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:43.086412  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:43.103185  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:43.103216  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:43.180526  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:43.171640    3695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:43.172414    3695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:43.174057    3695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:43.174649    3695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:43.176278    3695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:53:43.171640    3695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:43.172414    3695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:43.174057    3695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:43.174649    3695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:43.176278    3695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:53:45.681512  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:45.691980  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:45.692050  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:45.720468  604010 cri.go:89] found id: ""
	I1213 11:53:45.720494  604010 logs.go:282] 0 containers: []
	W1213 11:53:45.720503  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:45.720509  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:45.720566  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:45.745270  604010 cri.go:89] found id: ""
	I1213 11:53:45.745297  604010 logs.go:282] 0 containers: []
	W1213 11:53:45.745305  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:45.745312  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:45.745371  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:45.771959  604010 cri.go:89] found id: ""
	I1213 11:53:45.771989  604010 logs.go:282] 0 containers: []
	W1213 11:53:45.771998  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:45.772005  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:45.772063  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:45.797561  604010 cri.go:89] found id: ""
	I1213 11:53:45.797588  604010 logs.go:282] 0 containers: []
	W1213 11:53:45.797597  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:45.797604  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:45.797666  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:45.821937  604010 cri.go:89] found id: ""
	I1213 11:53:45.821965  604010 logs.go:282] 0 containers: []
	W1213 11:53:45.821975  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:45.821981  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:45.822041  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:45.854390  604010 cri.go:89] found id: ""
	I1213 11:53:45.854414  604010 logs.go:282] 0 containers: []
	W1213 11:53:45.854423  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:45.854430  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:45.854489  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:45.879570  604010 cri.go:89] found id: ""
	I1213 11:53:45.879597  604010 logs.go:282] 0 containers: []
	W1213 11:53:45.879616  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:45.879623  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:45.879681  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:45.904307  604010 cri.go:89] found id: ""
	I1213 11:53:45.904335  604010 logs.go:282] 0 containers: []
	W1213 11:53:45.904344  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:45.904354  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:45.904364  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:45.971467  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:45.971554  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:45.988842  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:45.988868  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:46.054484  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:46.046672    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:46.047076    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:46.048668    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:46.049161    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:46.050614    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:53:46.046672    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:46.047076    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:46.048668    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:46.049161    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:46.050614    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:53:46.054553  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:46.054579  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:46.079997  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:46.080032  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:48.608207  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:48.618848  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:48.618926  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:48.644320  604010 cri.go:89] found id: ""
	I1213 11:53:48.644344  604010 logs.go:282] 0 containers: []
	W1213 11:53:48.644352  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:48.644359  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:48.644420  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:48.669194  604010 cri.go:89] found id: ""
	I1213 11:53:48.669226  604010 logs.go:282] 0 containers: []
	W1213 11:53:48.669236  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:48.669242  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:48.669308  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:48.694072  604010 cri.go:89] found id: ""
	I1213 11:53:48.694097  604010 logs.go:282] 0 containers: []
	W1213 11:53:48.694107  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:48.694113  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:48.694188  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:48.718654  604010 cri.go:89] found id: ""
	I1213 11:53:48.718679  604010 logs.go:282] 0 containers: []
	W1213 11:53:48.718720  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:48.718727  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:48.718800  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:48.742539  604010 cri.go:89] found id: ""
	I1213 11:53:48.742571  604010 logs.go:282] 0 containers: []
	W1213 11:53:48.742580  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:48.742587  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:48.742660  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:48.771087  604010 cri.go:89] found id: ""
	I1213 11:53:48.771111  604010 logs.go:282] 0 containers: []
	W1213 11:53:48.771120  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:48.771126  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:48.771185  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:48.797732  604010 cri.go:89] found id: ""
	I1213 11:53:48.797755  604010 logs.go:282] 0 containers: []
	W1213 11:53:48.797764  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:48.797770  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:48.797834  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:48.822805  604010 cri.go:89] found id: ""
	I1213 11:53:48.822830  604010 logs.go:282] 0 containers: []
	W1213 11:53:48.822839  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:48.822849  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:48.822860  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:48.879446  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:48.879514  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:48.895910  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:48.895938  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:48.987206  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:48.978941    3903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:48.979739    3903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:48.981488    3903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:48.981826    3903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:48.983267    3903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:53:48.978941    3903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:48.979739    3903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:48.981488    3903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:48.981826    3903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:48.983267    3903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:53:48.987238  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:48.987251  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:49.014114  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:49.014150  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:50.175475  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:53:50.239481  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:53:50.239579  604010 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 11:53:50.242787  604010 out.go:179] * Enabled addons: 
	I1213 11:53:50.245448  604010 addons.go:530] duration metric: took 1m54.618181483s for enable addons: enabled=[]
	I1213 11:53:51.543477  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:51.554449  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:51.554521  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:51.579307  604010 cri.go:89] found id: ""
	I1213 11:53:51.579335  604010 logs.go:282] 0 containers: []
	W1213 11:53:51.579344  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:51.579350  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:51.579411  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:51.605002  604010 cri.go:89] found id: ""
	I1213 11:53:51.605029  604010 logs.go:282] 0 containers: []
	W1213 11:53:51.605040  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:51.605047  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:51.605108  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:51.629728  604010 cri.go:89] found id: ""
	I1213 11:53:51.629761  604010 logs.go:282] 0 containers: []
	W1213 11:53:51.629770  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:51.629777  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:51.629840  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:51.656823  604010 cri.go:89] found id: ""
	I1213 11:53:51.656846  604010 logs.go:282] 0 containers: []
	W1213 11:53:51.656855  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:51.656862  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:51.656919  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:51.684689  604010 cri.go:89] found id: ""
	I1213 11:53:51.684712  604010 logs.go:282] 0 containers: []
	W1213 11:53:51.684721  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:51.684728  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:51.684787  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:51.709741  604010 cri.go:89] found id: ""
	I1213 11:53:51.709768  604010 logs.go:282] 0 containers: []
	W1213 11:53:51.709776  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:51.709784  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:51.709895  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:51.735821  604010 cri.go:89] found id: ""
	I1213 11:53:51.735848  604010 logs.go:282] 0 containers: []
	W1213 11:53:51.735857  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:51.735863  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:51.735922  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:51.765085  604010 cri.go:89] found id: ""
	I1213 11:53:51.765111  604010 logs.go:282] 0 containers: []
	W1213 11:53:51.765120  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:51.765130  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:51.765143  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:51.820951  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:51.820986  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:51.837298  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:51.837448  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:51.903778  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:51.894875    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:51.895698    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:51.897404    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:51.897825    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:51.899293    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:53:51.894875    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:51.895698    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:51.897404    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:51.897825    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:51.899293    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:53:51.903855  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:51.903876  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:51.931477  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:51.931561  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:54.461061  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:54.471768  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:54.471839  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:54.497629  604010 cri.go:89] found id: ""
	I1213 11:53:54.497651  604010 logs.go:282] 0 containers: []
	W1213 11:53:54.497660  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:54.497666  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:54.497725  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:54.523805  604010 cri.go:89] found id: ""
	I1213 11:53:54.523830  604010 logs.go:282] 0 containers: []
	W1213 11:53:54.523839  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:54.523846  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:54.523905  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:54.548988  604010 cri.go:89] found id: ""
	I1213 11:53:54.549012  604010 logs.go:282] 0 containers: []
	W1213 11:53:54.549021  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:54.549027  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:54.549089  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:54.584912  604010 cri.go:89] found id: ""
	I1213 11:53:54.584996  604010 logs.go:282] 0 containers: []
	W1213 11:53:54.585012  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:54.585020  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:54.585094  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:54.613768  604010 cri.go:89] found id: ""
	I1213 11:53:54.613810  604010 logs.go:282] 0 containers: []
	W1213 11:53:54.613822  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:54.613832  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:54.613917  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:54.638498  604010 cri.go:89] found id: ""
	I1213 11:53:54.638523  604010 logs.go:282] 0 containers: []
	W1213 11:53:54.638531  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:54.638539  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:54.638597  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:54.663796  604010 cri.go:89] found id: ""
	I1213 11:53:54.663863  604010 logs.go:282] 0 containers: []
	W1213 11:53:54.663886  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:54.663904  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:54.663994  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:54.688512  604010 cri.go:89] found id: ""
	I1213 11:53:54.688595  604010 logs.go:282] 0 containers: []
	W1213 11:53:54.688612  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:54.688623  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:54.688635  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:54.745122  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:54.745158  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:54.761471  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:54.761502  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:54.827485  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:54.818964    4132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:54.819562    4132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:54.821065    4132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:54.821615    4132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:54.823257    4132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:53:54.818964    4132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:54.819562    4132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:54.821065    4132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:54.821615    4132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:54.823257    4132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:53:54.827506  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:54.827519  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:54.853348  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:54.853383  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:57.386439  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:57.396996  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:57.397067  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:57.432425  604010 cri.go:89] found id: ""
	I1213 11:53:57.432451  604010 logs.go:282] 0 containers: []
	W1213 11:53:57.432461  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:57.432468  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:57.432531  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:57.468740  604010 cri.go:89] found id: ""
	I1213 11:53:57.468767  604010 logs.go:282] 0 containers: []
	W1213 11:53:57.468777  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:57.468783  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:57.468848  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:57.496008  604010 cri.go:89] found id: ""
	I1213 11:53:57.496032  604010 logs.go:282] 0 containers: []
	W1213 11:53:57.496041  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:57.496053  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:57.496113  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:57.522430  604010 cri.go:89] found id: ""
	I1213 11:53:57.522454  604010 logs.go:282] 0 containers: []
	W1213 11:53:57.522463  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:57.522469  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:57.522528  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:57.547956  604010 cri.go:89] found id: ""
	I1213 11:53:57.547980  604010 logs.go:282] 0 containers: []
	W1213 11:53:57.547988  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:57.547994  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:57.548054  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:57.573554  604010 cri.go:89] found id: ""
	I1213 11:53:57.573579  604010 logs.go:282] 0 containers: []
	W1213 11:53:57.573589  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:57.573596  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:57.573658  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:57.597400  604010 cri.go:89] found id: ""
	I1213 11:53:57.597428  604010 logs.go:282] 0 containers: []
	W1213 11:53:57.597437  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:57.597443  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:57.597501  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:57.621599  604010 cri.go:89] found id: ""
	I1213 11:53:57.621623  604010 logs.go:282] 0 containers: []
	W1213 11:53:57.621632  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:57.621642  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:57.621653  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:57.677116  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:57.677153  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:57.692856  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:57.692929  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:57.758229  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:57.748721    4239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:57.749368    4239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:57.751042    4239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:57.751857    4239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:57.753632    4239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:53:57.748721    4239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:57.749368    4239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:57.751042    4239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:57.751857    4239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:57.753632    4239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:53:57.758252  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:57.758266  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:57.784520  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:57.784560  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:00.317292  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:00.352525  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:00.352620  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:00.392603  604010 cri.go:89] found id: ""
	I1213 11:54:00.392636  604010 logs.go:282] 0 containers: []
	W1213 11:54:00.392646  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:00.392654  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:00.392736  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:00.447117  604010 cri.go:89] found id: ""
	I1213 11:54:00.447149  604010 logs.go:282] 0 containers: []
	W1213 11:54:00.447158  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:00.447178  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:00.447281  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:00.479294  604010 cri.go:89] found id: ""
	I1213 11:54:00.479324  604010 logs.go:282] 0 containers: []
	W1213 11:54:00.479333  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:00.479339  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:00.479406  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:00.510064  604010 cri.go:89] found id: ""
	I1213 11:54:00.510092  604010 logs.go:282] 0 containers: []
	W1213 11:54:00.510101  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:00.510108  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:00.510184  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:00.537774  604010 cri.go:89] found id: ""
	I1213 11:54:00.537801  604010 logs.go:282] 0 containers: []
	W1213 11:54:00.537810  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:00.537816  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:00.537877  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:00.563430  604010 cri.go:89] found id: ""
	I1213 11:54:00.563460  604010 logs.go:282] 0 containers: []
	W1213 11:54:00.563469  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:00.563475  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:00.563534  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:00.588470  604010 cri.go:89] found id: ""
	I1213 11:54:00.588495  604010 logs.go:282] 0 containers: []
	W1213 11:54:00.588503  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:00.588510  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:00.588573  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:00.616819  604010 cri.go:89] found id: ""
	I1213 11:54:00.616853  604010 logs.go:282] 0 containers: []
	W1213 11:54:00.616865  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:00.616874  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:00.616887  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:00.632810  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:00.632837  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:00.697200  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:00.688095    4352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:00.688902    4352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:00.690382    4352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:00.690873    4352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:00.692718    4352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:00.688095    4352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:00.688902    4352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:00.690382    4352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:00.690873    4352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:00.692718    4352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
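	(Editor's note: the cycle above is minikube's apiserver health-wait loop. Every few seconds it looks for a kube-apiserver process with pgrep, asks crictl for each expected control-plane container by name, and, when none are found, gathers the kubelet/dmesg/containerd/crictl logs and retries "kubectl describe nodes". The same checks can be replayed by hand inside the node; a minimal sketch follows, assuming shell access to the node, e.g. via "minikube ssh -p <profile>" (the SSH entry point is an assumption; the commands themselves appear verbatim in the log above):

	# Run inside the minikube node. Mirrors the checks logged above.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  echo "$name: ${ids:-<no containers found>}"
	done
	# The describe-nodes probe that keeps failing with "connection refused":
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig
	)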
	I1213 11:54:00.697225  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:00.697239  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:00.722351  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:00.722391  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:00.753453  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:00.753489  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:03.309839  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:03.321093  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:03.321163  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:03.349567  604010 cri.go:89] found id: ""
	I1213 11:54:03.349591  604010 logs.go:282] 0 containers: []
	W1213 11:54:03.349600  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:03.349607  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:03.349667  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:03.374734  604010 cri.go:89] found id: ""
	I1213 11:54:03.374758  604010 logs.go:282] 0 containers: []
	W1213 11:54:03.374767  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:03.374774  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:03.374842  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:03.400074  604010 cri.go:89] found id: ""
	I1213 11:54:03.400099  604010 logs.go:282] 0 containers: []
	W1213 11:54:03.400108  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:03.400114  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:03.400172  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:03.461432  604010 cri.go:89] found id: ""
	I1213 11:54:03.461533  604010 logs.go:282] 0 containers: []
	W1213 11:54:03.461561  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:03.461583  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:03.461673  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:03.504466  604010 cri.go:89] found id: ""
	I1213 11:54:03.504544  604010 logs.go:282] 0 containers: []
	W1213 11:54:03.504566  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:03.504585  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:03.504671  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:03.545459  604010 cri.go:89] found id: ""
	I1213 11:54:03.545482  604010 logs.go:282] 0 containers: []
	W1213 11:54:03.545491  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:03.545497  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:03.545575  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:03.570446  604010 cri.go:89] found id: ""
	I1213 11:54:03.570468  604010 logs.go:282] 0 containers: []
	W1213 11:54:03.570476  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:03.570482  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:03.570539  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:03.595001  604010 cri.go:89] found id: ""
	I1213 11:54:03.595023  604010 logs.go:282] 0 containers: []
	W1213 11:54:03.595031  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:03.595041  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:03.595057  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:03.610922  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:03.610955  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:03.679130  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:03.671134    4462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:03.671746    4462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:03.673204    4462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:03.673644    4462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:03.675078    4462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:03.671134    4462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:03.671746    4462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:03.673204    4462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:03.673644    4462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:03.675078    4462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:54:03.679152  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:03.679167  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:03.705484  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:03.705522  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:03.732753  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:03.732778  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:06.289051  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:06.299935  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:06.300031  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:06.325745  604010 cri.go:89] found id: ""
	I1213 11:54:06.325777  604010 logs.go:282] 0 containers: []
	W1213 11:54:06.325787  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:06.325794  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:06.325898  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:06.352273  604010 cri.go:89] found id: ""
	I1213 11:54:06.352342  604010 logs.go:282] 0 containers: []
	W1213 11:54:06.352357  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:06.352365  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:06.352437  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:06.376413  604010 cri.go:89] found id: ""
	I1213 11:54:06.376482  604010 logs.go:282] 0 containers: []
	W1213 11:54:06.376507  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:06.376520  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:06.376596  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:06.406144  604010 cri.go:89] found id: ""
	I1213 11:54:06.406188  604010 logs.go:282] 0 containers: []
	W1213 11:54:06.406198  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:06.406206  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:06.406285  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:06.456311  604010 cri.go:89] found id: ""
	I1213 11:54:06.456388  604010 logs.go:282] 0 containers: []
	W1213 11:54:06.456411  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:06.456430  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:06.456526  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:06.510060  604010 cri.go:89] found id: ""
	I1213 11:54:06.510150  604010 logs.go:282] 0 containers: []
	W1213 11:54:06.510174  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:06.510194  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:06.510310  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:06.542373  604010 cri.go:89] found id: ""
	I1213 11:54:06.542450  604010 logs.go:282] 0 containers: []
	W1213 11:54:06.542472  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:06.542494  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:06.542601  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:06.567983  604010 cri.go:89] found id: ""
	I1213 11:54:06.568063  604010 logs.go:282] 0 containers: []
	W1213 11:54:06.568087  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:06.568104  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:06.568129  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:06.624463  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:06.624498  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:06.640970  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:06.641003  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:06.714019  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:06.704918    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:06.705767    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:06.706758    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:06.708430    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:06.708734    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:06.704918    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:06.705767    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:06.706758    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:06.708430    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:06.708734    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:54:06.714096  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:06.714117  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:06.739708  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:06.739748  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:09.268501  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:09.279334  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:09.279413  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:09.308998  604010 cri.go:89] found id: ""
	I1213 11:54:09.309034  604010 logs.go:282] 0 containers: []
	W1213 11:54:09.309043  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:09.309050  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:09.309110  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:09.336921  604010 cri.go:89] found id: ""
	I1213 11:54:09.336947  604010 logs.go:282] 0 containers: []
	W1213 11:54:09.336956  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:09.336963  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:09.337025  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:09.367100  604010 cri.go:89] found id: ""
	I1213 11:54:09.367123  604010 logs.go:282] 0 containers: []
	W1213 11:54:09.367131  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:09.367138  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:09.367196  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:09.392881  604010 cri.go:89] found id: ""
	I1213 11:54:09.392913  604010 logs.go:282] 0 containers: []
	W1213 11:54:09.392922  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:09.392930  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:09.392991  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:09.433300  604010 cri.go:89] found id: ""
	I1213 11:54:09.433330  604010 logs.go:282] 0 containers: []
	W1213 11:54:09.433339  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:09.433345  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:09.433408  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:09.499329  604010 cri.go:89] found id: ""
	I1213 11:54:09.499357  604010 logs.go:282] 0 containers: []
	W1213 11:54:09.499365  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:09.499372  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:09.499434  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:09.526348  604010 cri.go:89] found id: ""
	I1213 11:54:09.526383  604010 logs.go:282] 0 containers: []
	W1213 11:54:09.526392  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:09.526399  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:09.526467  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:09.551552  604010 cri.go:89] found id: ""
	I1213 11:54:09.551585  604010 logs.go:282] 0 containers: []
	W1213 11:54:09.551595  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:09.551605  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:09.551617  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:09.607976  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:09.608011  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:09.624198  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:09.624228  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:09.692042  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:09.683184    4687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:09.683833    4687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:09.685650    4687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:09.686276    4687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:09.688111    4687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:09.683184    4687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:09.683833    4687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:09.685650    4687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:09.686276    4687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:09.688111    4687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:54:09.692065  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:09.692077  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:09.717762  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:09.717799  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:12.251306  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:12.261889  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:12.261958  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:12.286128  604010 cri.go:89] found id: ""
	I1213 11:54:12.286151  604010 logs.go:282] 0 containers: []
	W1213 11:54:12.286160  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:12.286166  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:12.286231  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:12.320955  604010 cri.go:89] found id: ""
	I1213 11:54:12.320982  604010 logs.go:282] 0 containers: []
	W1213 11:54:12.320992  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:12.320999  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:12.321064  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:12.347366  604010 cri.go:89] found id: ""
	I1213 11:54:12.347394  604010 logs.go:282] 0 containers: []
	W1213 11:54:12.347404  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:12.347411  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:12.347475  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:12.372047  604010 cri.go:89] found id: ""
	I1213 11:54:12.372075  604010 logs.go:282] 0 containers: []
	W1213 11:54:12.372084  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:12.372091  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:12.372211  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:12.397441  604010 cri.go:89] found id: ""
	I1213 11:54:12.397466  604010 logs.go:282] 0 containers: []
	W1213 11:54:12.397475  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:12.397482  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:12.397610  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:12.458383  604010 cri.go:89] found id: ""
	I1213 11:54:12.458464  604010 logs.go:282] 0 containers: []
	W1213 11:54:12.458487  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:12.458505  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:12.458610  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:12.499011  604010 cri.go:89] found id: ""
	I1213 11:54:12.499087  604010 logs.go:282] 0 containers: []
	W1213 11:54:12.499110  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:12.499128  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:12.499223  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:12.526019  604010 cri.go:89] found id: ""
	I1213 11:54:12.526048  604010 logs.go:282] 0 containers: []
	W1213 11:54:12.526058  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:12.526068  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:12.526079  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:12.582388  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:12.582425  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:12.598760  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:12.598788  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:12.668226  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:12.659694    4803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:12.660116    4803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:12.661902    4803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:12.662352    4803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:12.663961    4803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:12.659694    4803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:12.660116    4803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:12.661902    4803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:12.662352    4803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:12.663961    4803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:54:12.668250  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:12.668263  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:12.698476  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:12.698514  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:15.226309  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:15.237066  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:15.237138  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:15.261808  604010 cri.go:89] found id: ""
	I1213 11:54:15.261836  604010 logs.go:282] 0 containers: []
	W1213 11:54:15.261845  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:15.261851  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:15.261912  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:15.286942  604010 cri.go:89] found id: ""
	I1213 11:54:15.286966  604010 logs.go:282] 0 containers: []
	W1213 11:54:15.286975  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:15.286981  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:15.287066  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:15.311813  604010 cri.go:89] found id: ""
	I1213 11:54:15.311842  604010 logs.go:282] 0 containers: []
	W1213 11:54:15.311852  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:15.311859  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:15.311920  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:15.341088  604010 cri.go:89] found id: ""
	I1213 11:54:15.341116  604010 logs.go:282] 0 containers: []
	W1213 11:54:15.341124  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:15.341131  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:15.341188  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:15.365220  604010 cri.go:89] found id: ""
	I1213 11:54:15.365247  604010 logs.go:282] 0 containers: []
	W1213 11:54:15.365256  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:15.365263  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:15.365319  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:15.389056  604010 cri.go:89] found id: ""
	I1213 11:54:15.389084  604010 logs.go:282] 0 containers: []
	W1213 11:54:15.389093  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:15.389099  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:15.389159  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:15.424168  604010 cri.go:89] found id: ""
	I1213 11:54:15.424197  604010 logs.go:282] 0 containers: []
	W1213 11:54:15.424206  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:15.424215  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:15.424275  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:15.458977  604010 cri.go:89] found id: ""
	I1213 11:54:15.459014  604010 logs.go:282] 0 containers: []
	W1213 11:54:15.459023  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:15.459033  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:15.459045  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:15.488624  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:15.488665  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:15.534272  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:15.534300  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:15.593055  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:15.593092  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:15.609340  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:15.609370  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:15.673503  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:15.664722    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:15.665497    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:15.667260    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:15.667958    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:15.669529    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:15.664722    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:15.665497    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:15.667260    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:15.667958    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:15.669529    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
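	(Editor's note: every describe-nodes attempt fails identically because nothing is listening on 127.0.0.1:8443; no kube-apiserver container was ever created. The port state can be confirmed directly from inside the node; a sketch, assuming the "ss" and "curl" binaries are present in the node image (an assumption, not shown in this log):

	# Confirm nothing is bound to the apiserver port.
	sudo ss -ltnp | grep 8443 || echo "port 8443 not listening"
	# A direct probe reproduces the kubectl "connection refused" error above.
	curl -sk https://localhost:8443/healthz || echo "connection refused, as logged"
	)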
	I1213 11:54:18.175202  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:18.185611  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:18.185684  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:18.216571  604010 cri.go:89] found id: ""
	I1213 11:54:18.216598  604010 logs.go:282] 0 containers: []
	W1213 11:54:18.216609  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:18.216616  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:18.216676  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:18.244020  604010 cri.go:89] found id: ""
	I1213 11:54:18.244044  604010 logs.go:282] 0 containers: []
	W1213 11:54:18.244053  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:18.244060  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:18.244125  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:18.269644  604010 cri.go:89] found id: ""
	I1213 11:54:18.269677  604010 logs.go:282] 0 containers: []
	W1213 11:54:18.269686  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:18.269699  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:18.269759  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:18.295049  604010 cri.go:89] found id: ""
	I1213 11:54:18.295074  604010 logs.go:282] 0 containers: []
	W1213 11:54:18.295084  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:18.295092  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:18.295151  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:18.319970  604010 cri.go:89] found id: ""
	I1213 11:54:18.319994  604010 logs.go:282] 0 containers: []
	W1213 11:54:18.320003  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:18.320009  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:18.320068  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:18.348557  604010 cri.go:89] found id: ""
	I1213 11:54:18.348583  604010 logs.go:282] 0 containers: []
	W1213 11:54:18.348591  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:18.348598  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:18.348661  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:18.372733  604010 cri.go:89] found id: ""
	I1213 11:54:18.372759  604010 logs.go:282] 0 containers: []
	W1213 11:54:18.372769  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:18.372775  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:18.372833  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:18.397904  604010 cri.go:89] found id: ""
	I1213 11:54:18.397927  604010 logs.go:282] 0 containers: []
	W1213 11:54:18.397936  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:18.397945  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:18.397958  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:18.475145  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:18.475177  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:18.509115  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:18.509140  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:18.578046  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:18.568558    5027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:18.569407    5027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:18.571224    5027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:18.571849    5027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:18.573663    5027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:18.568558    5027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:18.569407    5027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:18.571224    5027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:18.571849    5027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:18.573663    5027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:54:18.578069  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:18.578080  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:18.604022  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:18.604057  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:21.135717  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:21.151653  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:21.151722  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:21.181267  604010 cri.go:89] found id: ""
	I1213 11:54:21.181292  604010 logs.go:282] 0 containers: []
	W1213 11:54:21.181300  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:21.181306  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:21.181363  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:21.211036  604010 cri.go:89] found id: ""
	I1213 11:54:21.211064  604010 logs.go:282] 0 containers: []
	W1213 11:54:21.211073  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:21.211079  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:21.211136  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:21.235057  604010 cri.go:89] found id: ""
	I1213 11:54:21.235082  604010 logs.go:282] 0 containers: []
	W1213 11:54:21.235091  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:21.235097  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:21.235158  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:21.259604  604010 cri.go:89] found id: ""
	I1213 11:54:21.259629  604010 logs.go:282] 0 containers: []
	W1213 11:54:21.259637  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:21.259644  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:21.259710  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:21.284921  604010 cri.go:89] found id: ""
	I1213 11:54:21.284948  604010 logs.go:282] 0 containers: []
	W1213 11:54:21.284957  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:21.284963  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:21.285022  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:21.311134  604010 cri.go:89] found id: ""
	I1213 11:54:21.311162  604010 logs.go:282] 0 containers: []
	W1213 11:54:21.311171  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:21.311178  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:21.311238  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:21.337100  604010 cri.go:89] found id: ""
	I1213 11:54:21.337124  604010 logs.go:282] 0 containers: []
	W1213 11:54:21.337133  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:21.337140  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:21.337201  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:21.361945  604010 cri.go:89] found id: ""
	I1213 11:54:21.361969  604010 logs.go:282] 0 containers: []
	W1213 11:54:21.361977  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:21.361987  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:21.362001  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:21.424925  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:21.424964  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:21.442370  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:21.442449  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:21.544421  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:21.527951    5142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:21.529143    5142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:21.530038    5142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:21.535082    5142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:21.535420    5142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:21.527951    5142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:21.529143    5142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:21.530038    5142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:21.535082    5142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:21.535420    5142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
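	(Editor's note: since "crictl ps -a" reports no containers of any kind, the failure sits below Kubernetes itself: the kubelet never created the static control-plane pods. The kubelet and containerd journals that the loop keeps collecting are where the root cause would surface; a sketch of the same collection plus a service-status check (the systemctl call is an assumed extra step, the journalctl commands are taken from the log):

	# Same journals the wait loop gathers, per the log above.
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400
	# Assumed extra check, not in the log: is the kubelet unit running at all?
	systemctl status kubelet --no-pager
	)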
	I1213 11:54:21.544487  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:21.544508  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:21.569861  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:21.569899  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:24.098574  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:24.109255  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:24.109328  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:24.135881  604010 cri.go:89] found id: ""
	I1213 11:54:24.135904  604010 logs.go:282] 0 containers: []
	W1213 11:54:24.135913  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:24.135919  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:24.135976  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:24.160249  604010 cri.go:89] found id: ""
	I1213 11:54:24.160272  604010 logs.go:282] 0 containers: []
	W1213 11:54:24.160281  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:24.160294  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:24.160356  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:24.185097  604010 cri.go:89] found id: ""
	I1213 11:54:24.185120  604010 logs.go:282] 0 containers: []
	W1213 11:54:24.185129  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:24.185136  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:24.185197  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:24.210052  604010 cri.go:89] found id: ""
	I1213 11:54:24.210133  604010 logs.go:282] 0 containers: []
	W1213 11:54:24.210156  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:24.210174  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:24.210263  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:24.234868  604010 cri.go:89] found id: ""
	I1213 11:54:24.234895  604010 logs.go:282] 0 containers: []
	W1213 11:54:24.234905  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:24.234912  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:24.234968  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:24.258998  604010 cri.go:89] found id: ""
	I1213 11:54:24.259023  604010 logs.go:282] 0 containers: []
	W1213 11:54:24.259032  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:24.259039  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:24.259099  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:24.282644  604010 cri.go:89] found id: ""
	I1213 11:54:24.282672  604010 logs.go:282] 0 containers: []
	W1213 11:54:24.282713  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:24.282721  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:24.282780  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:24.312793  604010 cri.go:89] found id: ""
	I1213 11:54:24.312822  604010 logs.go:282] 0 containers: []
	W1213 11:54:24.312831  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:24.312841  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:24.312853  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:24.328614  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:24.328643  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:24.398953  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:24.390748    5250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:24.391466    5250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:24.392548    5250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:24.393304    5250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:24.394893    5250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:24.390748    5250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:24.391466    5250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:24.392548    5250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:24.393304    5250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:24.394893    5250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:54:24.398978  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:24.398992  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:24.447276  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:24.447353  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:24.512358  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:24.512384  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
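
The cycle above repeats every few seconds: minikube looks for a kube-apiserver process, asks crictl for each control-plane container, then gathers kubelet, dmesg, describe-nodes, containerd, and container-status output, but every crictl query returns an empty ID list. The same probe can be re-run by hand from the host. A minimal sketch, assuming the failing profile name is passed as the first argument and that crictl is on the node's PATH (plausible for kicbase nodes, but not confirmed by this log):

    #!/usr/bin/env bash
    # Re-run minikube's control-plane probe inside the node by hand.
    PROFILE="$1"   # placeholder: pass the profile of the failing test
    minikube -p "$PROFILE" ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== $c =="
      # Empty output here corresponds to the log's 'found id: ""' entries.
      minikube -p "$PROFILE" ssh -- sudo crictl ps -a --quiet --name="$c"
    done
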
	I1213 11:54:27.079756  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:27.090085  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:27.090157  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:27.114934  604010 cri.go:89] found id: ""
	I1213 11:54:27.114957  604010 logs.go:282] 0 containers: []
	W1213 11:54:27.114966  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:27.114972  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:27.115032  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:27.139399  604010 cri.go:89] found id: ""
	I1213 11:54:27.139424  604010 logs.go:282] 0 containers: []
	W1213 11:54:27.139433  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:27.139439  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:27.139496  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:27.164348  604010 cri.go:89] found id: ""
	I1213 11:54:27.164371  604010 logs.go:282] 0 containers: []
	W1213 11:54:27.164379  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:27.164385  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:27.164443  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:27.189263  604010 cri.go:89] found id: ""
	I1213 11:54:27.189286  604010 logs.go:282] 0 containers: []
	W1213 11:54:27.189294  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:27.189302  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:27.189362  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:27.214003  604010 cri.go:89] found id: ""
	I1213 11:54:27.214076  604010 logs.go:282] 0 containers: []
	W1213 11:54:27.214101  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:27.214121  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:27.214204  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:27.238568  604010 cri.go:89] found id: ""
	I1213 11:54:27.238632  604010 logs.go:282] 0 containers: []
	W1213 11:54:27.238657  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:27.238675  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:27.238861  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:27.263827  604010 cri.go:89] found id: ""
	I1213 11:54:27.263850  604010 logs.go:282] 0 containers: []
	W1213 11:54:27.263858  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:27.263864  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:27.263941  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:27.293643  604010 cri.go:89] found id: ""
	I1213 11:54:27.293672  604010 logs.go:282] 0 containers: []
	W1213 11:54:27.293680  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:27.293691  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:27.293706  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:27.353462  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:27.353498  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:27.369639  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:27.369723  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:27.462957  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:27.448639    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:27.449130    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:27.455578    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:27.456379    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:27.459064    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:27.448639    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:27.449130    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:27.455578    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:27.456379    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:27.459064    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:54:27.462984  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:27.463007  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:27.502080  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:27.502115  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:30.033979  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:30.048817  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:30.048921  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:30.086312  604010 cri.go:89] found id: ""
	I1213 11:54:30.086343  604010 logs.go:282] 0 containers: []
	W1213 11:54:30.086353  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:30.086361  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:30.086431  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:30.118027  604010 cri.go:89] found id: ""
	I1213 11:54:30.118056  604010 logs.go:282] 0 containers: []
	W1213 11:54:30.118066  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:30.118073  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:30.118139  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:30.150398  604010 cri.go:89] found id: ""
	I1213 11:54:30.150422  604010 logs.go:282] 0 containers: []
	W1213 11:54:30.150431  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:30.150437  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:30.150501  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:30.176994  604010 cri.go:89] found id: ""
	I1213 11:54:30.177024  604010 logs.go:282] 0 containers: []
	W1213 11:54:30.177033  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:30.177040  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:30.177102  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:30.204667  604010 cri.go:89] found id: ""
	I1213 11:54:30.204692  604010 logs.go:282] 0 containers: []
	W1213 11:54:30.204702  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:30.204709  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:30.204768  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:30.233311  604010 cri.go:89] found id: ""
	I1213 11:54:30.233340  604010 logs.go:282] 0 containers: []
	W1213 11:54:30.233350  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:30.233357  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:30.233443  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:30.258722  604010 cri.go:89] found id: ""
	I1213 11:54:30.258749  604010 logs.go:282] 0 containers: []
	W1213 11:54:30.258759  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:30.258766  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:30.258828  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:30.284738  604010 cri.go:89] found id: ""
	I1213 11:54:30.284766  604010 logs.go:282] 0 containers: []
	W1213 11:54:30.284775  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:30.284785  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:30.284797  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:30.352842  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:30.344108    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:30.344689    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:30.346232    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:30.346735    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:30.348264    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:30.344108    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:30.344689    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:30.346232    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:30.346735    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:30.348264    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:54:30.352861  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:30.352873  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:30.377958  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:30.377993  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:30.409746  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:30.409777  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:30.497989  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:30.498042  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:33.019623  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:33.030945  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:33.031018  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:33.060411  604010 cri.go:89] found id: ""
	I1213 11:54:33.060436  604010 logs.go:282] 0 containers: []
	W1213 11:54:33.060445  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:33.060452  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:33.060514  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:33.085659  604010 cri.go:89] found id: ""
	I1213 11:54:33.085684  604010 logs.go:282] 0 containers: []
	W1213 11:54:33.085693  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:33.085700  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:33.085762  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:33.110577  604010 cri.go:89] found id: ""
	I1213 11:54:33.110603  604010 logs.go:282] 0 containers: []
	W1213 11:54:33.110612  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:33.110618  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:33.110676  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:33.140224  604010 cri.go:89] found id: ""
	I1213 11:54:33.140252  604010 logs.go:282] 0 containers: []
	W1213 11:54:33.140261  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:33.140267  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:33.140328  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:33.165441  604010 cri.go:89] found id: ""
	I1213 11:54:33.165467  604010 logs.go:282] 0 containers: []
	W1213 11:54:33.165477  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:33.165483  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:33.165574  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:33.191299  604010 cri.go:89] found id: ""
	I1213 11:54:33.191324  604010 logs.go:282] 0 containers: []
	W1213 11:54:33.191332  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:33.191339  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:33.191400  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:33.216285  604010 cri.go:89] found id: ""
	I1213 11:54:33.216311  604010 logs.go:282] 0 containers: []
	W1213 11:54:33.216320  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:33.216327  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:33.216386  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:33.241156  604010 cri.go:89] found id: ""
	I1213 11:54:33.241180  604010 logs.go:282] 0 containers: []
	W1213 11:54:33.241189  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:33.241199  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:33.241210  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:33.269984  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:33.270014  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:33.326746  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:33.326782  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:33.343845  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:33.343874  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:33.421478  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:33.403624    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:33.404936    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:33.405920    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:33.407713    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:33.408279    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:33.403624    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:33.404936    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:33.405920    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:33.407713    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:33.408279    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:54:33.421564  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:33.421594  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:35.956688  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:35.967776  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:35.967847  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:35.992715  604010 cri.go:89] found id: ""
	I1213 11:54:35.992745  604010 logs.go:282] 0 containers: []
	W1213 11:54:35.992753  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:35.992760  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:35.992821  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:36.030819  604010 cri.go:89] found id: ""
	I1213 11:54:36.030854  604010 logs.go:282] 0 containers: []
	W1213 11:54:36.030864  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:36.030870  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:36.030940  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:36.056512  604010 cri.go:89] found id: ""
	I1213 11:54:36.056537  604010 logs.go:282] 0 containers: []
	W1213 11:54:36.056547  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:36.056553  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:36.056613  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:36.083355  604010 cri.go:89] found id: ""
	I1213 11:54:36.083381  604010 logs.go:282] 0 containers: []
	W1213 11:54:36.083390  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:36.083397  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:36.083458  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:36.109765  604010 cri.go:89] found id: ""
	I1213 11:54:36.109791  604010 logs.go:282] 0 containers: []
	W1213 11:54:36.109799  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:36.109806  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:36.109866  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:36.139001  604010 cri.go:89] found id: ""
	I1213 11:54:36.139030  604010 logs.go:282] 0 containers: []
	W1213 11:54:36.139040  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:36.139048  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:36.139109  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:36.164252  604010 cri.go:89] found id: ""
	I1213 11:54:36.164280  604010 logs.go:282] 0 containers: []
	W1213 11:54:36.164290  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:36.164297  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:36.164419  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:36.193554  604010 cri.go:89] found id: ""
	I1213 11:54:36.193579  604010 logs.go:282] 0 containers: []
	W1213 11:54:36.193588  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:36.193597  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:36.193609  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:36.225514  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:36.225555  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:36.284505  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:36.284551  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:36.300602  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:36.300632  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:36.368620  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:36.358956    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:36.360036    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:36.361784    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:36.362389    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:36.364078    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:36.358956    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:36.360036    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:36.361784    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:36.362389    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:36.364078    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:54:36.368642  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:36.368654  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:38.894313  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:38.906401  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:38.906478  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:38.931173  604010 cri.go:89] found id: ""
	I1213 11:54:38.931200  604010 logs.go:282] 0 containers: []
	W1213 11:54:38.931210  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:38.931217  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:38.931280  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:38.957289  604010 cri.go:89] found id: ""
	I1213 11:54:38.957315  604010 logs.go:282] 0 containers: []
	W1213 11:54:38.957324  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:38.957330  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:38.957391  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:38.984282  604010 cri.go:89] found id: ""
	I1213 11:54:38.984307  604010 logs.go:282] 0 containers: []
	W1213 11:54:38.984317  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:38.984323  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:38.984402  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:39.012924  604010 cri.go:89] found id: ""
	I1213 11:54:39.012994  604010 logs.go:282] 0 containers: []
	W1213 11:54:39.013012  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:39.013021  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:39.013085  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:39.039025  604010 cri.go:89] found id: ""
	I1213 11:54:39.039062  604010 logs.go:282] 0 containers: []
	W1213 11:54:39.039071  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:39.039077  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:39.039145  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:39.066984  604010 cri.go:89] found id: ""
	I1213 11:54:39.067009  604010 logs.go:282] 0 containers: []
	W1213 11:54:39.067018  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:39.067024  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:39.067088  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:39.093147  604010 cri.go:89] found id: ""
	I1213 11:54:39.093172  604010 logs.go:282] 0 containers: []
	W1213 11:54:39.093181  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:39.093188  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:39.093247  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:39.120841  604010 cri.go:89] found id: ""
	I1213 11:54:39.120866  604010 logs.go:282] 0 containers: []
	W1213 11:54:39.120875  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:39.120884  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:39.120896  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:39.177077  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:39.177113  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:39.193258  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:39.193284  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:39.255506  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:39.246949    5824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:39.247600    5824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:39.249297    5824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:39.249837    5824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:39.251408    5824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:39.246949    5824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:39.247600    5824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:39.249297    5824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:39.249837    5824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:39.251408    5824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:54:39.255531  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:39.255546  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:39.280959  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:39.280995  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:41.808371  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:41.820751  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:41.820829  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:41.847226  604010 cri.go:89] found id: ""
	I1213 11:54:41.847249  604010 logs.go:282] 0 containers: []
	W1213 11:54:41.847258  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:41.847264  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:41.847322  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:41.873405  604010 cri.go:89] found id: ""
	I1213 11:54:41.873436  604010 logs.go:282] 0 containers: []
	W1213 11:54:41.873448  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:41.873455  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:41.873519  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:41.899479  604010 cri.go:89] found id: ""
	I1213 11:54:41.899509  604010 logs.go:282] 0 containers: []
	W1213 11:54:41.899518  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:41.899524  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:41.899582  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:41.923515  604010 cri.go:89] found id: ""
	I1213 11:54:41.923545  604010 logs.go:282] 0 containers: []
	W1213 11:54:41.923554  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:41.923561  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:41.923621  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:41.952086  604010 cri.go:89] found id: ""
	I1213 11:54:41.952110  604010 logs.go:282] 0 containers: []
	W1213 11:54:41.952119  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:41.952125  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:41.952182  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:41.976613  604010 cri.go:89] found id: ""
	I1213 11:54:41.976637  604010 logs.go:282] 0 containers: []
	W1213 11:54:41.976646  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:41.976653  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:41.976714  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:42.010402  604010 cri.go:89] found id: ""
	I1213 11:54:42.010434  604010 logs.go:282] 0 containers: []
	W1213 11:54:42.010443  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:42.010450  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:42.010520  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:42.038928  604010 cri.go:89] found id: ""
	I1213 11:54:42.038955  604010 logs.go:282] 0 containers: []
	W1213 11:54:42.038964  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:42.038974  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:42.038985  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:42.096963  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:42.097004  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:42.115172  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:42.115213  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:42.192959  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:42.182320    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:42.183391    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:42.184373    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:42.186141    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:42.186781    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:42.182320    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:42.183391    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:42.184373    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:42.186141    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:42.186781    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:54:42.192981  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:42.192995  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:42.219986  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:42.220023  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:44.750998  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:44.761521  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:44.761601  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:44.785581  604010 cri.go:89] found id: ""
	I1213 11:54:44.785609  604010 logs.go:282] 0 containers: []
	W1213 11:54:44.785618  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:44.785625  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:44.785681  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:44.810312  604010 cri.go:89] found id: ""
	I1213 11:54:44.810340  604010 logs.go:282] 0 containers: []
	W1213 11:54:44.810349  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:44.810356  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:44.810419  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:44.834980  604010 cri.go:89] found id: ""
	I1213 11:54:44.835004  604010 logs.go:282] 0 containers: []
	W1213 11:54:44.835012  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:44.835018  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:44.835082  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:44.868160  604010 cri.go:89] found id: ""
	I1213 11:54:44.868187  604010 logs.go:282] 0 containers: []
	W1213 11:54:44.868196  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:44.868203  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:44.868263  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:44.893689  604010 cri.go:89] found id: ""
	I1213 11:54:44.893715  604010 logs.go:282] 0 containers: []
	W1213 11:54:44.893723  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:44.893730  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:44.893788  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:44.918090  604010 cri.go:89] found id: ""
	I1213 11:54:44.918119  604010 logs.go:282] 0 containers: []
	W1213 11:54:44.918128  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:44.918135  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:44.918196  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:44.944994  604010 cri.go:89] found id: ""
	I1213 11:54:44.945022  604010 logs.go:282] 0 containers: []
	W1213 11:54:44.945032  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:44.945038  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:44.945102  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:44.969862  604010 cri.go:89] found id: ""
	I1213 11:54:44.969891  604010 logs.go:282] 0 containers: []
	W1213 11:54:44.969900  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:44.969910  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:44.969921  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:45.027468  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:45.027521  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:45.054117  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:45.054213  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:45.178092  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:45.159739    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:45.160529    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:45.166319    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:45.166867    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:45.169009    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:45.178126  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:45.178168  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:45.209407  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:45.209462  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
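The block above is one complete pass of the control-plane probe: each "listing CRI containers" step shells out to crictl with --quiet, which prints one container ID per line, so empty output means "no container found matching" that name. A minimal Go sketch of the probe, assuming crictl and sudo are available on the node; an illustration of the technique, not minikube's actual cri.go code:

// Probe for named CRI containers the way the log lines above do.
// `crictl ps -a --quiet --name=<name>` prints one container ID per
// line; empty output means no match. Sketch only: binary paths and
// sudo usage are assumptions.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func findContainers(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil // treat a failed probe the same as "no containers"
	}
	return strings.Fields(string(out)) // one ID per line
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns"} {
		if ids := findContainers(name); len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
		} else {
			fmt.Printf("%s: %v\n", name, ids)
		}
	}
}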
	I1213 11:54:47.757891  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:47.768440  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:47.768511  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:47.797232  604010 cri.go:89] found id: ""
	I1213 11:54:47.797258  604010 logs.go:282] 0 containers: []
	W1213 11:54:47.797267  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:47.797274  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:47.797331  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:47.822035  604010 cri.go:89] found id: ""
	I1213 11:54:47.822059  604010 logs.go:282] 0 containers: []
	W1213 11:54:47.822068  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:47.822074  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:47.822139  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:47.850594  604010 cri.go:89] found id: ""
	I1213 11:54:47.850619  604010 logs.go:282] 0 containers: []
	W1213 11:54:47.850627  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:47.850634  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:47.850715  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:47.875934  604010 cri.go:89] found id: ""
	I1213 11:54:47.875958  604010 logs.go:282] 0 containers: []
	W1213 11:54:47.875967  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:47.875975  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:47.876036  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:47.904019  604010 cri.go:89] found id: ""
	I1213 11:54:47.904043  604010 logs.go:282] 0 containers: []
	W1213 11:54:47.904051  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:47.904058  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:47.904122  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:47.928717  604010 cri.go:89] found id: ""
	I1213 11:54:47.928743  604010 logs.go:282] 0 containers: []
	W1213 11:54:47.928751  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:47.928758  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:47.928818  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:47.953107  604010 cri.go:89] found id: ""
	I1213 11:54:47.953135  604010 logs.go:282] 0 containers: []
	W1213 11:54:47.953144  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:47.953152  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:47.953228  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:47.977855  604010 cri.go:89] found id: ""
	I1213 11:54:47.977891  604010 logs.go:282] 0 containers: []
	W1213 11:54:47.977900  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:47.977910  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:47.977940  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:48.033045  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:48.033085  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:48.049516  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:48.049571  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:48.119802  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:48.111384    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:48.112145    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:48.113839    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:48.114220    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:48.115737    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:48.119824  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:48.119837  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:48.144575  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:48.144606  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
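Note the cadence: each "sudo pgrep -xnf kube-apiserver.*minikube.*" run lands roughly three seconds after the previous one. The section is a wait loop that polls for the apiserver process and re-gathers diagnostics on every miss. A sketch of that loop shape only; the 3s interval is read off the timestamps and the overall deadline is an assumption, not minikube's configured timeout:

// Poll for the kube-apiserver process until it appears or a deadline
// passes. pgrep exits non-zero when nothing matches the pattern.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func apiserverUp() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute) // assumed budget
	for time.Now().Before(deadline) {
		if apiserverUp() {
			fmt.Println("kube-apiserver is running")
			return
		}
		fmt.Println("kube-apiserver not found; gathering logs and retrying")
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}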
	I1213 11:54:50.674890  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:50.689012  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:50.689130  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:50.747025  604010 cri.go:89] found id: ""
	I1213 11:54:50.747102  604010 logs.go:282] 0 containers: []
	W1213 11:54:50.747125  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:50.747143  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:50.747232  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:50.775729  604010 cri.go:89] found id: ""
	I1213 11:54:50.775795  604010 logs.go:282] 0 containers: []
	W1213 11:54:50.775812  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:50.775820  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:50.775887  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:50.799251  604010 cri.go:89] found id: ""
	I1213 11:54:50.799277  604010 logs.go:282] 0 containers: []
	W1213 11:54:50.799286  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:50.799292  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:50.799380  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:50.822964  604010 cri.go:89] found id: ""
	I1213 11:54:50.823033  604010 logs.go:282] 0 containers: []
	W1213 11:54:50.823047  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:50.823054  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:50.823125  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:50.851245  604010 cri.go:89] found id: ""
	I1213 11:54:50.851270  604010 logs.go:282] 0 containers: []
	W1213 11:54:50.851279  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:50.851285  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:50.851346  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:50.877382  604010 cri.go:89] found id: ""
	I1213 11:54:50.877405  604010 logs.go:282] 0 containers: []
	W1213 11:54:50.877414  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:50.877420  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:50.877478  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:50.903657  604010 cri.go:89] found id: ""
	I1213 11:54:50.903681  604010 logs.go:282] 0 containers: []
	W1213 11:54:50.903690  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:50.903696  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:50.903754  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:50.931954  604010 cri.go:89] found id: ""
	I1213 11:54:50.931977  604010 logs.go:282] 0 containers: []
	W1213 11:54:50.931992  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:50.932002  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:50.932016  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:50.988153  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:50.988188  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:51.004868  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:51.004912  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:51.078536  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:51.069572    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:51.070163    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:51.071963    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:51.072503    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:51.074005    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:51.078558  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:51.078571  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:51.105933  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:51.105979  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
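Every one of the repeated describe-nodes failures reduces to the same symptom: dial tcp [::1]:8443: connect: connection refused, i.e. nothing is listening on the apiserver port inside the node. A bare TCP dial reproduces that failure mode without involving kubectl; the address is taken directly from the log:

// Check whether anything is listening on the apiserver port.
// "connection refused" here is the same error kubectl reports above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port open")
}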
	I1213 11:54:53.638010  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:53.648726  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:53.648799  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:53.692658  604010 cri.go:89] found id: ""
	I1213 11:54:53.692685  604010 logs.go:282] 0 containers: []
	W1213 11:54:53.692693  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:53.692700  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:53.692760  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:53.728295  604010 cri.go:89] found id: ""
	I1213 11:54:53.728326  604010 logs.go:282] 0 containers: []
	W1213 11:54:53.728335  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:53.728343  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:53.728402  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:53.768548  604010 cri.go:89] found id: ""
	I1213 11:54:53.768576  604010 logs.go:282] 0 containers: []
	W1213 11:54:53.768585  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:53.768591  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:53.768649  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:53.808130  604010 cri.go:89] found id: ""
	I1213 11:54:53.808152  604010 logs.go:282] 0 containers: []
	W1213 11:54:53.808161  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:53.808167  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:53.808231  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:53.832811  604010 cri.go:89] found id: ""
	I1213 11:54:53.832839  604010 logs.go:282] 0 containers: []
	W1213 11:54:53.832849  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:53.832856  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:53.832916  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:53.857746  604010 cri.go:89] found id: ""
	I1213 11:54:53.857770  604010 logs.go:282] 0 containers: []
	W1213 11:54:53.857778  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:53.857785  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:53.857844  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:53.881722  604010 cri.go:89] found id: ""
	I1213 11:54:53.881747  604010 logs.go:282] 0 containers: []
	W1213 11:54:53.881756  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:53.881763  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:53.881830  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:53.907820  604010 cri.go:89] found id: ""
	I1213 11:54:53.907844  604010 logs.go:282] 0 containers: []
	W1213 11:54:53.907854  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:53.907864  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:53.907877  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:53.963717  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:53.963753  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:53.979615  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:53.979645  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:54.065903  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:54.056577    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:54.057248    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:54.058603    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:54.059235    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:54.061166    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:54.065924  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:54.065938  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:54.091653  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:54.091689  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
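The journal-based gather steps ("Gathering logs for kubelet ...", "... for containerd ...") each run journalctl for one systemd unit and keep the last 400 lines. A local sketch of that collection step; minikube actually executes these commands over SSH inside the node, which is omitted here:

// Collect the tail of a systemd unit's journal, as in the
// "journalctl -u <unit> -n 400" commands above.
package main

import (
	"fmt"
	"os/exec"
)

func unitLogs(unit string, lines int) (string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(lines)).CombinedOutput()
	return string(out), err
}

func main() {
	for _, unit := range []string{"kubelet", "containerd"} {
		logs, err := unitLogs(unit, 400)
		if err != nil {
			fmt.Printf("collecting %s logs failed: %v\n", unit, err)
			continue
		}
		fmt.Printf("--- %s: %d bytes of journal collected ---\n", unit, len(logs))
	}
}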
	I1213 11:54:56.621960  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:56.633738  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:56.633810  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:56.692820  604010 cri.go:89] found id: ""
	I1213 11:54:56.692846  604010 logs.go:282] 0 containers: []
	W1213 11:54:56.692856  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:56.692863  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:56.692924  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:56.758799  604010 cri.go:89] found id: ""
	I1213 11:54:56.758842  604010 logs.go:282] 0 containers: []
	W1213 11:54:56.758870  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:56.758884  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:56.758978  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:56.784490  604010 cri.go:89] found id: ""
	I1213 11:54:56.784516  604010 logs.go:282] 0 containers: []
	W1213 11:54:56.784525  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:56.784532  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:56.784593  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:56.808898  604010 cri.go:89] found id: ""
	I1213 11:54:56.808919  604010 logs.go:282] 0 containers: []
	W1213 11:54:56.808928  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:56.808940  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:56.808998  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:56.833308  604010 cri.go:89] found id: ""
	I1213 11:54:56.833373  604010 logs.go:282] 0 containers: []
	W1213 11:54:56.833398  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:56.833416  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:56.833489  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:56.862468  604010 cri.go:89] found id: ""
	I1213 11:54:56.862543  604010 logs.go:282] 0 containers: []
	W1213 11:54:56.862568  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:56.862588  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:56.862678  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:56.891924  604010 cri.go:89] found id: ""
	I1213 11:54:56.891952  604010 logs.go:282] 0 containers: []
	W1213 11:54:56.891962  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:56.891969  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:56.892033  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:56.916269  604010 cri.go:89] found id: ""
	I1213 11:54:56.916296  604010 logs.go:282] 0 containers: []
	W1213 11:54:56.916306  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:56.916315  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:56.916327  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:56.980544  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:56.971761    6500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:56.972786    6500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:56.974371    6500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:56.974958    6500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:56.976490    6500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:56.980565  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:56.980579  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:57.005423  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:57.005460  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:57.032993  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:57.033071  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:57.088966  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:57.089003  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
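The dmesg step filters the kernel ring buffer down to warning-and-worse messages (-H human-readable output, -P without a pager, -L=never without color, --level selecting the severities) and keeps the last 400 lines with tail. A sketch that keeps the tail the same way, assuming a util-linux dmesg on PATH:

// Capture warn-and-worse kernel messages and keep the last 400 lines.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "dmesg", "-PH", "-L=never",
		"--level", "warn,err,crit,alert,emerg").Output()
	if err != nil {
		fmt.Println("dmesg failed:", err)
		return
	}
	lines := strings.Split(strings.TrimRight(string(out), "\n"), "\n")
	if len(lines) > 400 {
		lines = lines[len(lines)-400:] // same effect as | tail -n 400
	}
	fmt.Println(strings.Join(lines, "\n"))
}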
	I1213 11:54:59.606260  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:59.617007  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:59.617079  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:59.644389  604010 cri.go:89] found id: ""
	I1213 11:54:59.644411  604010 logs.go:282] 0 containers: []
	W1213 11:54:59.644420  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:59.644427  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:59.644484  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:59.689247  604010 cri.go:89] found id: ""
	I1213 11:54:59.689273  604010 logs.go:282] 0 containers: []
	W1213 11:54:59.689282  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:59.689289  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:59.689348  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:59.729540  604010 cri.go:89] found id: ""
	I1213 11:54:59.729582  604010 logs.go:282] 0 containers: []
	W1213 11:54:59.729591  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:59.729597  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:59.729658  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:59.759256  604010 cri.go:89] found id: ""
	I1213 11:54:59.759286  604010 logs.go:282] 0 containers: []
	W1213 11:54:59.759295  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:59.759301  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:59.759362  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:59.788748  604010 cri.go:89] found id: ""
	I1213 11:54:59.788772  604010 logs.go:282] 0 containers: []
	W1213 11:54:59.788780  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:59.788787  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:59.788846  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:59.817278  604010 cri.go:89] found id: ""
	I1213 11:54:59.817313  604010 logs.go:282] 0 containers: []
	W1213 11:54:59.817322  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:59.817328  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:59.817389  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:59.842756  604010 cri.go:89] found id: ""
	I1213 11:54:59.842780  604010 logs.go:282] 0 containers: []
	W1213 11:54:59.842788  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:59.842794  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:59.842862  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:59.868412  604010 cri.go:89] found id: ""
	I1213 11:54:59.868435  604010 logs.go:282] 0 containers: []
	W1213 11:54:59.868443  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:59.868453  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:59.868464  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:59.924773  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:59.924808  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:59.940672  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:59.940704  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:00.041026  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:00.001683    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:00.002326    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:00.007036    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:00.009108    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:00.010359    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:55:00.045695  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:00.045733  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:00.200188  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:00.200291  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
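The container-status gather is a shell fallback chain: "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" resolves crictl if it is on PATH, and if the whole crictl invocation still fails, falls back to Docker. The same preference order as a Go sketch:

// Prefer crictl for listing containers; fall back to docker when the
// crictl invocation is unavailable or fails.
package main

import (
	"fmt"
	"os/exec"
)

func containerStatus() (string, error) {
	if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
		return string(out), nil
	}
	out, err := exec.Command("sudo", "docker", "ps", "-a").Output()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("no container runtime CLI available:", err)
		return
	}
	fmt.Print(out)
}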
	I1213 11:55:02.798329  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:02.808984  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:02.809067  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:02.836650  604010 cri.go:89] found id: ""
	I1213 11:55:02.836675  604010 logs.go:282] 0 containers: []
	W1213 11:55:02.836684  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:02.836692  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:02.836755  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:02.861812  604010 cri.go:89] found id: ""
	I1213 11:55:02.861837  604010 logs.go:282] 0 containers: []
	W1213 11:55:02.861846  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:02.861853  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:02.861915  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:02.892956  604010 cri.go:89] found id: ""
	I1213 11:55:02.892982  604010 logs.go:282] 0 containers: []
	W1213 11:55:02.892992  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:02.892999  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:02.893061  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:02.921418  604010 cri.go:89] found id: ""
	I1213 11:55:02.921444  604010 logs.go:282] 0 containers: []
	W1213 11:55:02.921454  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:02.921460  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:02.921517  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:02.945971  604010 cri.go:89] found id: ""
	I1213 11:55:02.945998  604010 logs.go:282] 0 containers: []
	W1213 11:55:02.946007  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:02.946013  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:02.946071  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:02.971224  604010 cri.go:89] found id: ""
	I1213 11:55:02.971249  604010 logs.go:282] 0 containers: []
	W1213 11:55:02.971258  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:02.971264  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:02.971322  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:02.996070  604010 cri.go:89] found id: ""
	I1213 11:55:02.996098  604010 logs.go:282] 0 containers: []
	W1213 11:55:02.996107  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:02.996113  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:02.996175  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:03.026595  604010 cri.go:89] found id: ""
	I1213 11:55:03.026628  604010 logs.go:282] 0 containers: []
	W1213 11:55:03.026637  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:03.026647  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:03.026662  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:03.083030  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:03.083068  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:03.099216  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:03.099247  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:03.164245  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:03.155657    6728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:03.156486    6728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:03.158171    6728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:03.158870    6728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:03.160386    6728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:55:03.164269  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:03.164287  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:03.190063  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:03.190105  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:05.717488  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:05.729517  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:05.729651  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:05.754839  604010 cri.go:89] found id: ""
	I1213 11:55:05.754862  604010 logs.go:282] 0 containers: []
	W1213 11:55:05.754870  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:05.754877  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:05.754935  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:05.779444  604010 cri.go:89] found id: ""
	I1213 11:55:05.779470  604010 logs.go:282] 0 containers: []
	W1213 11:55:05.779478  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:05.779486  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:05.779546  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:05.804435  604010 cri.go:89] found id: ""
	I1213 11:55:05.804460  604010 logs.go:282] 0 containers: []
	W1213 11:55:05.804468  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:05.804475  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:05.804536  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:05.828365  604010 cri.go:89] found id: ""
	I1213 11:55:05.828431  604010 logs.go:282] 0 containers: []
	W1213 11:55:05.828454  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:05.828473  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:05.828538  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:05.853088  604010 cri.go:89] found id: ""
	I1213 11:55:05.853114  604010 logs.go:282] 0 containers: []
	W1213 11:55:05.853123  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:05.853129  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:05.853187  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:05.881265  604010 cri.go:89] found id: ""
	I1213 11:55:05.881288  604010 logs.go:282] 0 containers: []
	W1213 11:55:05.881297  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:05.881303  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:05.881363  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:05.907771  604010 cri.go:89] found id: ""
	I1213 11:55:05.907795  604010 logs.go:282] 0 containers: []
	W1213 11:55:05.907804  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:05.907811  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:05.907881  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:05.932155  604010 cri.go:89] found id: ""
	I1213 11:55:05.932181  604010 logs.go:282] 0 containers: []
	W1213 11:55:05.932189  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:05.932199  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:05.932211  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:05.960440  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:05.960467  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:06.018319  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:06.018357  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:06.034573  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:06.034602  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:06.099936  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:06.091153    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:06.091939    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:06.093705    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:06.094323    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:06.095974    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:55:06.099962  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:06.099975  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:08.626581  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:08.637490  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:08.637574  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:08.674556  604010 cri.go:89] found id: ""
	I1213 11:55:08.674581  604010 logs.go:282] 0 containers: []
	W1213 11:55:08.674589  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:08.674598  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:08.674659  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:08.719063  604010 cri.go:89] found id: ""
	I1213 11:55:08.719087  604010 logs.go:282] 0 containers: []
	W1213 11:55:08.719095  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:08.719101  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:08.719166  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:08.761839  604010 cri.go:89] found id: ""
	I1213 11:55:08.761863  604010 logs.go:282] 0 containers: []
	W1213 11:55:08.761872  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:08.761878  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:08.761939  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:08.793242  604010 cri.go:89] found id: ""
	I1213 11:55:08.793266  604010 logs.go:282] 0 containers: []
	W1213 11:55:08.793274  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:08.793281  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:08.793338  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:08.823380  604010 cri.go:89] found id: ""
	I1213 11:55:08.823406  604010 logs.go:282] 0 containers: []
	W1213 11:55:08.823416  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:08.823424  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:08.823488  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:08.849669  604010 cri.go:89] found id: ""
	I1213 11:55:08.849696  604010 logs.go:282] 0 containers: []
	W1213 11:55:08.849705  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:08.849712  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:08.849773  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:08.876618  604010 cri.go:89] found id: ""
	I1213 11:55:08.876684  604010 logs.go:282] 0 containers: []
	W1213 11:55:08.876707  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:08.876726  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:08.876807  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:08.902762  604010 cri.go:89] found id: ""
	I1213 11:55:08.902802  604010 logs.go:282] 0 containers: []
	W1213 11:55:08.902811  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:08.902820  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:08.902833  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:08.918880  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:08.918910  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:08.990155  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:08.981658    6952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:08.982141    6952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:08.984095    6952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:08.984454    6952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:08.986001    6952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:55:08.990182  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:08.990196  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:09.017239  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:09.017278  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:09.049754  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:09.049785  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
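Each retry above runs the same probe: a pgrep for a live apiserver process, then one crictl query per expected control-plane container. The commands are taken verbatim from the Run: lines in this cycle and can be replayed by hand on the node (for example over minikube ssh; the pgrep pattern is quoted here so the shell does not expand it):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl ps -a --quiet --name=etcd
    sudo crictl ps -a --quiet --name=coredns
    sudo crictl ps -a --quiet --name=kube-scheduler
    sudo crictl ps -a --quiet --name=kube-proxy
    sudo crictl ps -a --quiet --name=kube-controller-manager

Empty output from every query matches the found id: "" results above: no control-plane container was ever created. A new cycle starts roughly every three seconds (11:55:08 through 11:55:32 below), consistent with a fixed retry interval while the start deadline runs down.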
	I1213 11:55:11.607272  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:11.617804  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:11.617876  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:11.646336  604010 cri.go:89] found id: ""
	I1213 11:55:11.646359  604010 logs.go:282] 0 containers: []
	W1213 11:55:11.646368  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:11.646374  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:11.646434  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:11.684464  604010 cri.go:89] found id: ""
	I1213 11:55:11.684490  604010 logs.go:282] 0 containers: []
	W1213 11:55:11.684499  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:11.684505  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:11.684566  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:11.724793  604010 cri.go:89] found id: ""
	I1213 11:55:11.724816  604010 logs.go:282] 0 containers: []
	W1213 11:55:11.724824  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:11.724831  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:11.724890  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:11.760776  604010 cri.go:89] found id: ""
	I1213 11:55:11.760799  604010 logs.go:282] 0 containers: []
	W1213 11:55:11.760807  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:11.760814  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:11.760873  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:11.787122  604010 cri.go:89] found id: ""
	I1213 11:55:11.787195  604010 logs.go:282] 0 containers: []
	W1213 11:55:11.787217  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:11.787237  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:11.787333  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:11.812257  604010 cri.go:89] found id: ""
	I1213 11:55:11.812283  604010 logs.go:282] 0 containers: []
	W1213 11:55:11.812291  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:11.812298  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:11.812359  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:11.837304  604010 cri.go:89] found id: ""
	I1213 11:55:11.837341  604010 logs.go:282] 0 containers: []
	W1213 11:55:11.837350  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:11.837356  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:11.837427  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:11.861726  604010 cri.go:89] found id: ""
	I1213 11:55:11.861759  604010 logs.go:282] 0 containers: []
	W1213 11:55:11.861768  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:11.861778  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:11.861792  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:11.918248  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:11.918285  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:11.934535  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:11.934571  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:12.005308  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:11.993379    7063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:11.994149    7063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:11.995831    7063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:11.996328    7063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:11.998145    7063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:55:12.005338  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:12.005351  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:12.031381  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:12.031415  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:14.558358  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:14.569230  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:14.569297  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:14.594108  604010 cri.go:89] found id: ""
	I1213 11:55:14.594186  604010 logs.go:282] 0 containers: []
	W1213 11:55:14.594209  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:14.594231  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:14.594306  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:14.617763  604010 cri.go:89] found id: ""
	I1213 11:55:14.617784  604010 logs.go:282] 0 containers: []
	W1213 11:55:14.617818  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:14.617824  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:14.617882  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:14.641477  604010 cri.go:89] found id: ""
	I1213 11:55:14.641499  604010 logs.go:282] 0 containers: []
	W1213 11:55:14.641508  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:14.641514  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:14.641580  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:14.706320  604010 cri.go:89] found id: ""
	I1213 11:55:14.706395  604010 logs.go:282] 0 containers: []
	W1213 11:55:14.706419  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:14.706438  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:14.706530  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:14.750579  604010 cri.go:89] found id: ""
	I1213 11:55:14.750602  604010 logs.go:282] 0 containers: []
	W1213 11:55:14.750611  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:14.750617  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:14.750738  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:14.777264  604010 cri.go:89] found id: ""
	I1213 11:55:14.777299  604010 logs.go:282] 0 containers: []
	W1213 11:55:14.777308  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:14.777321  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:14.777392  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:14.801675  604010 cri.go:89] found id: ""
	I1213 11:55:14.801750  604010 logs.go:282] 0 containers: []
	W1213 11:55:14.801775  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:14.801794  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:14.801878  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:14.826273  604010 cri.go:89] found id: ""
	I1213 11:55:14.826308  604010 logs.go:282] 0 containers: []
	W1213 11:55:14.826317  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:14.826327  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:14.826341  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:14.852456  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:14.852492  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:14.880309  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:14.880337  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:14.935692  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:14.935727  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:14.952137  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:14.952167  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:15.033989  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:15.011900    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:15.014560    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:15.015092    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:15.017168    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:15.018209    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
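With no containers to inspect, the only diagnostics left are host-level, and each cycle gathers the same five sources (again verbatim from the Run: lines above):

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

Only the last of these needs a reachable apiserver, which is why describe nodes is the one gathering step that fails in every cycle.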
	I1213 11:55:17.535599  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:17.547401  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:17.547477  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:17.573160  604010 cri.go:89] found id: ""
	I1213 11:55:17.573190  604010 logs.go:282] 0 containers: []
	W1213 11:55:17.573199  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:17.573206  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:17.573269  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:17.602638  604010 cri.go:89] found id: ""
	I1213 11:55:17.602664  604010 logs.go:282] 0 containers: []
	W1213 11:55:17.602673  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:17.602679  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:17.602761  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:17.628217  604010 cri.go:89] found id: ""
	I1213 11:55:17.628242  604010 logs.go:282] 0 containers: []
	W1213 11:55:17.628251  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:17.628258  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:17.628321  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:17.653857  604010 cri.go:89] found id: ""
	I1213 11:55:17.653923  604010 logs.go:282] 0 containers: []
	W1213 11:55:17.653934  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:17.653941  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:17.654004  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:17.730131  604010 cri.go:89] found id: ""
	I1213 11:55:17.730166  604010 logs.go:282] 0 containers: []
	W1213 11:55:17.730175  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:17.730211  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:17.730290  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:17.764018  604010 cri.go:89] found id: ""
	I1213 11:55:17.764045  604010 logs.go:282] 0 containers: []
	W1213 11:55:17.764053  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:17.764060  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:17.764139  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:17.789006  604010 cri.go:89] found id: ""
	I1213 11:55:17.789029  604010 logs.go:282] 0 containers: []
	W1213 11:55:17.789039  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:17.789045  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:17.789110  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:17.820038  604010 cri.go:89] found id: ""
	I1213 11:55:17.820061  604010 logs.go:282] 0 containers: []
	W1213 11:55:17.820070  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:17.820080  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:17.820091  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:17.845672  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:17.845708  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:17.876520  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:17.876549  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:17.934113  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:17.934148  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:17.950852  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:17.950884  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:18.024225  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:18.014810    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:18.015320    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:18.017184    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:18.017872    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:18.019543    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:55:20.526091  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:20.539006  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:20.539072  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:20.568228  604010 cri.go:89] found id: ""
	I1213 11:55:20.568252  604010 logs.go:282] 0 containers: []
	W1213 11:55:20.568260  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:20.568266  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:20.568341  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:20.595603  604010 cri.go:89] found id: ""
	I1213 11:55:20.595632  604010 logs.go:282] 0 containers: []
	W1213 11:55:20.595642  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:20.595648  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:20.595710  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:20.619697  604010 cri.go:89] found id: ""
	I1213 11:55:20.619723  604010 logs.go:282] 0 containers: []
	W1213 11:55:20.619732  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:20.619739  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:20.619801  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:20.644480  604010 cri.go:89] found id: ""
	I1213 11:55:20.644507  604010 logs.go:282] 0 containers: []
	W1213 11:55:20.644516  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:20.644523  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:20.644605  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:20.707263  604010 cri.go:89] found id: ""
	I1213 11:55:20.707286  604010 logs.go:282] 0 containers: []
	W1213 11:55:20.707295  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:20.707301  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:20.707362  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:20.753734  604010 cri.go:89] found id: ""
	I1213 11:55:20.753758  604010 logs.go:282] 0 containers: []
	W1213 11:55:20.753767  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:20.753773  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:20.753832  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:20.779244  604010 cri.go:89] found id: ""
	I1213 11:55:20.779267  604010 logs.go:282] 0 containers: []
	W1213 11:55:20.779275  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:20.779282  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:20.779342  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:20.808050  604010 cri.go:89] found id: ""
	I1213 11:55:20.808127  604010 logs.go:282] 0 containers: []
	W1213 11:55:20.808144  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:20.808155  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:20.808167  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:20.863714  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:20.863751  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:20.879958  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:20.879988  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:20.947629  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:20.938365    7395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:20.939048    7395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:20.940693    7395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:20.941317    7395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:20.943088    7395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:55:20.947653  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:20.947668  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:20.972884  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:20.972921  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:23.506189  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:23.517150  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:23.517220  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:23.544888  604010 cri.go:89] found id: ""
	I1213 11:55:23.544912  604010 logs.go:282] 0 containers: []
	W1213 11:55:23.544920  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:23.544927  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:23.544992  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:23.571162  604010 cri.go:89] found id: ""
	I1213 11:55:23.571189  604010 logs.go:282] 0 containers: []
	W1213 11:55:23.571197  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:23.571204  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:23.571288  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:23.596593  604010 cri.go:89] found id: ""
	I1213 11:55:23.596618  604010 logs.go:282] 0 containers: []
	W1213 11:55:23.596626  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:23.596633  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:23.596693  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:23.622396  604010 cri.go:89] found id: ""
	I1213 11:55:23.622424  604010 logs.go:282] 0 containers: []
	W1213 11:55:23.622433  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:23.622439  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:23.622541  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:23.648441  604010 cri.go:89] found id: ""
	I1213 11:55:23.648468  604010 logs.go:282] 0 containers: []
	W1213 11:55:23.648478  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:23.648484  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:23.648552  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:23.698559  604010 cri.go:89] found id: ""
	I1213 11:55:23.698586  604010 logs.go:282] 0 containers: []
	W1213 11:55:23.698595  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:23.698601  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:23.698664  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:23.749855  604010 cri.go:89] found id: ""
	I1213 11:55:23.749883  604010 logs.go:282] 0 containers: []
	W1213 11:55:23.749893  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:23.749905  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:23.749964  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:23.781499  604010 cri.go:89] found id: ""
	I1213 11:55:23.781527  604010 logs.go:282] 0 containers: []
	W1213 11:55:23.781536  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:23.781547  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:23.781571  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:23.815145  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:23.815174  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:23.871093  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:23.871128  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:23.887427  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:23.887455  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:23.956327  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:23.948085    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:23.948683    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:23.950286    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:23.950824    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:23.952300    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:55:23.956396  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:23.956417  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:26.482024  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:26.492511  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:26.492582  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:26.517699  604010 cri.go:89] found id: ""
	I1213 11:55:26.517777  604010 logs.go:282] 0 containers: []
	W1213 11:55:26.517800  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:26.517818  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:26.517906  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:26.545138  604010 cri.go:89] found id: ""
	I1213 11:55:26.545207  604010 logs.go:282] 0 containers: []
	W1213 11:55:26.545233  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:26.545251  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:26.545341  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:26.570019  604010 cri.go:89] found id: ""
	I1213 11:55:26.570090  604010 logs.go:282] 0 containers: []
	W1213 11:55:26.570116  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:26.570134  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:26.570226  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:26.596752  604010 cri.go:89] found id: ""
	I1213 11:55:26.596831  604010 logs.go:282] 0 containers: []
	W1213 11:55:26.596854  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:26.596869  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:26.596946  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:26.625280  604010 cri.go:89] found id: ""
	I1213 11:55:26.625306  604010 logs.go:282] 0 containers: []
	W1213 11:55:26.625315  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:26.625322  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:26.625379  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:26.655489  604010 cri.go:89] found id: ""
	I1213 11:55:26.655513  604010 logs.go:282] 0 containers: []
	W1213 11:55:26.655522  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:26.655528  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:26.655594  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:26.688001  604010 cri.go:89] found id: ""
	I1213 11:55:26.688028  604010 logs.go:282] 0 containers: []
	W1213 11:55:26.688037  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:26.688043  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:26.688103  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:26.720200  604010 cri.go:89] found id: ""
	I1213 11:55:26.720226  604010 logs.go:282] 0 containers: []
	W1213 11:55:26.720235  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:26.720244  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:26.720255  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:26.751334  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:26.751368  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:26.791793  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:26.791819  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:26.847456  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:26.847493  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:26.864079  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:26.864109  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:26.927248  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:26.919337    7630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:26.920135    7630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:26.921687    7630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:26.921990    7630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:26.923429    7630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
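Every describe-nodes attempt fails the same way: kubectl on the node targets https://localhost:8443, and since crictl never finds a kube-apiserver container, nothing is listening on that port. The failure can be confirmed directly with the same binary and kubeconfig paths shown in this log (an illustrative check, not part of minikube's own output):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz
    # expected while the apiserver is down:
    #   The connection to the server localhost:8443 was refused - did you specify the right host or port?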
	I1213 11:55:29.427521  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:29.438225  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:29.438297  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:29.463111  604010 cri.go:89] found id: ""
	I1213 11:55:29.463137  604010 logs.go:282] 0 containers: []
	W1213 11:55:29.463146  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:29.463154  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:29.463222  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:29.488474  604010 cri.go:89] found id: ""
	I1213 11:55:29.488504  604010 logs.go:282] 0 containers: []
	W1213 11:55:29.488513  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:29.488519  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:29.488580  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:29.514792  604010 cri.go:89] found id: ""
	I1213 11:55:29.514815  604010 logs.go:282] 0 containers: []
	W1213 11:55:29.514824  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:29.514830  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:29.514890  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:29.540502  604010 cri.go:89] found id: ""
	I1213 11:55:29.540528  604010 logs.go:282] 0 containers: []
	W1213 11:55:29.540537  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:29.540544  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:29.540623  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:29.569010  604010 cri.go:89] found id: ""
	I1213 11:55:29.569035  604010 logs.go:282] 0 containers: []
	W1213 11:55:29.569044  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:29.569050  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:29.569143  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:29.597354  604010 cri.go:89] found id: ""
	I1213 11:55:29.597381  604010 logs.go:282] 0 containers: []
	W1213 11:55:29.597390  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:29.597396  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:29.597482  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:29.622205  604010 cri.go:89] found id: ""
	I1213 11:55:29.622230  604010 logs.go:282] 0 containers: []
	W1213 11:55:29.622239  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:29.622245  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:29.622321  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:29.649830  604010 cri.go:89] found id: ""
	I1213 11:55:29.649856  604010 logs.go:282] 0 containers: []
	W1213 11:55:29.649865  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:29.649874  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:29.649914  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:29.717017  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:29.717058  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:29.745372  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:29.745398  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:29.821563  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:29.813336    7729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:29.813962    7729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:29.815565    7729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:29.815994    7729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:29.817582    7729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:55:29.821589  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:29.821603  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:29.847167  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:29.847206  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
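The block above is one pass of minikube's apiserver wait loop: roughly every three seconds it looks for a running kube-apiserver process and, when none is found, enumerates the control-plane containers and gathers logs before retrying. A minimal bash sketch of the liveness check the log shows (the loop framing is an assumption for illustration, not minikube's actual Go code in logs.go/cri.go):

    # pgrep -xnf: match the regex against the full command line (-f),
    # require a whole-line match (-x), report only the newest PID (-n).
    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        # No process yet; ask the CRI runtime whether the container even exists.
        # --quiet prints bare container IDs, so empty output means "never created".
        sudo crictl ps -a --quiet --name=kube-apiserver
        sleep 3
    done

In this run the loop never exits: every probe returns found id: "", so the node has no kube-apiserver container in any state.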
	I1213 11:55:32.379999  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:32.394044  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:32.394117  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:32.419725  604010 cri.go:89] found id: ""
	I1213 11:55:32.419751  604010 logs.go:282] 0 containers: []
	W1213 11:55:32.419759  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:32.419767  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:32.419827  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:32.448514  604010 cri.go:89] found id: ""
	I1213 11:55:32.448537  604010 logs.go:282] 0 containers: []
	W1213 11:55:32.448546  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:32.448552  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:32.448614  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:32.474220  604010 cri.go:89] found id: ""
	I1213 11:55:32.474257  604010 logs.go:282] 0 containers: []
	W1213 11:55:32.474266  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:32.474272  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:32.474331  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:32.501945  604010 cri.go:89] found id: ""
	I1213 11:55:32.501970  604010 logs.go:282] 0 containers: []
	W1213 11:55:32.501980  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:32.501987  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:32.502051  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:32.529117  604010 cri.go:89] found id: ""
	I1213 11:55:32.529143  604010 logs.go:282] 0 containers: []
	W1213 11:55:32.529151  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:32.529159  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:32.529220  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:32.558516  604010 cri.go:89] found id: ""
	I1213 11:55:32.558545  604010 logs.go:282] 0 containers: []
	W1213 11:55:32.558554  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:32.558563  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:32.558624  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:32.584351  604010 cri.go:89] found id: ""
	I1213 11:55:32.584375  604010 logs.go:282] 0 containers: []
	W1213 11:55:32.584383  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:32.584390  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:32.584459  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:32.610180  604010 cri.go:89] found id: ""
	I1213 11:55:32.610203  604010 logs.go:282] 0 containers: []
	W1213 11:55:32.610212  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:32.610222  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:32.610233  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:32.668609  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:32.668647  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:32.687093  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:32.687199  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:32.806632  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:32.798550    7842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:32.799065    7842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:32.800667    7842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:32.801088    7842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:32.802806    7842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:32.798550    7842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:32.799065    7842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:32.800667    7842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:32.801088    7842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:32.802806    7842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:55:32.806658  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:32.806670  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:32.832549  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:32.832585  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
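Each cycle probes the same fixed list of component names — kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard — and every probe comes back empty. The equivalent standalone loop, using only the crictl flags that appear in the log (the loop itself is illustrative, not minikube code):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
        # -a includes exited containers, so an empty result means the kubelet
        # never created the container at all, as opposed to it having crashed.
        ids=$(sudo crictl ps -a --quiet --name="$c")
        echo "$c: ${ids:-<none>}"
    done

An all-empty result like the one logged points at the static pods never being created, i.e. at the kubelet or its manifests, rather than at any individual component.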
	I1213 11:55:35.361963  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:35.372809  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:35.372881  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:35.398138  604010 cri.go:89] found id: ""
	I1213 11:55:35.398164  604010 logs.go:282] 0 containers: []
	W1213 11:55:35.398172  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:35.398178  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:35.398238  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:35.423828  604010 cri.go:89] found id: ""
	I1213 11:55:35.423854  604010 logs.go:282] 0 containers: []
	W1213 11:55:35.423863  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:35.423870  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:35.423934  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:35.453483  604010 cri.go:89] found id: ""
	I1213 11:55:35.453508  604010 logs.go:282] 0 containers: []
	W1213 11:55:35.453518  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:35.453524  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:35.453617  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:35.478270  604010 cri.go:89] found id: ""
	I1213 11:55:35.478294  604010 logs.go:282] 0 containers: []
	W1213 11:55:35.478303  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:35.478310  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:35.478373  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:35.508196  604010 cri.go:89] found id: ""
	I1213 11:55:35.508226  604010 logs.go:282] 0 containers: []
	W1213 11:55:35.508235  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:35.508242  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:35.508327  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:35.537327  604010 cri.go:89] found id: ""
	I1213 11:55:35.537359  604010 logs.go:282] 0 containers: []
	W1213 11:55:35.537369  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:35.537401  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:35.537490  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:35.564387  604010 cri.go:89] found id: ""
	I1213 11:55:35.564412  604010 logs.go:282] 0 containers: []
	W1213 11:55:35.564420  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:35.564427  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:35.564483  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:35.589741  604010 cri.go:89] found id: ""
	I1213 11:55:35.589766  604010 logs.go:282] 0 containers: []
	W1213 11:55:35.589776  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:35.589787  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:35.589798  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:35.645240  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:35.645275  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:35.672440  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:35.672532  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:35.779839  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:35.770429    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:35.771175    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:35.772996    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:35.773416    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:35.775177    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:35.770429    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:35.771175    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:35.772996    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:35.773416    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:35.775177    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:55:35.779861  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:35.779874  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:35.804945  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:35.804983  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
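The "container status" step uses a small shell fallback chain: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a. The command substitution picks up crictl's path when which finds it and otherwise substitutes the bare name so the invocation is still attempted under sudo; if that whole crictl call fails, the pipeline falls back to docker. Unrolled for readability (behaviourally the same line, written out as a reading of its intent):

    # Prefer the CRI CLI; fall back to Docker only if crictl is absent or errors.
    CRICTL=$(which crictl || echo crictl)
    sudo "$CRICTL" ps -a || sudo docker ps -a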
	I1213 11:55:38.336379  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:38.347209  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:38.347278  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:38.372679  604010 cri.go:89] found id: ""
	I1213 11:55:38.372706  604010 logs.go:282] 0 containers: []
	W1213 11:55:38.372716  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:38.372723  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:38.372781  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:38.401308  604010 cri.go:89] found id: ""
	I1213 11:55:38.401340  604010 logs.go:282] 0 containers: []
	W1213 11:55:38.401354  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:38.401360  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:38.401428  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:38.425990  604010 cri.go:89] found id: ""
	I1213 11:55:38.426025  604010 logs.go:282] 0 containers: []
	W1213 11:55:38.426034  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:38.426040  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:38.426097  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:38.452858  604010 cri.go:89] found id: ""
	I1213 11:55:38.452884  604010 logs.go:282] 0 containers: []
	W1213 11:55:38.452892  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:38.452900  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:38.452958  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:38.477766  604010 cri.go:89] found id: ""
	I1213 11:55:38.477791  604010 logs.go:282] 0 containers: []
	W1213 11:55:38.477800  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:38.477807  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:38.477876  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:38.503003  604010 cri.go:89] found id: ""
	I1213 11:55:38.503028  604010 logs.go:282] 0 containers: []
	W1213 11:55:38.503037  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:38.503043  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:38.503110  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:38.532923  604010 cri.go:89] found id: ""
	I1213 11:55:38.532946  604010 logs.go:282] 0 containers: []
	W1213 11:55:38.532955  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:38.532962  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:38.533021  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:38.561367  604010 cri.go:89] found id: ""
	I1213 11:55:38.561389  604010 logs.go:282] 0 containers: []
	W1213 11:55:38.561397  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:38.561406  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:38.561425  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:38.627276  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:38.618551    8054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:38.619310    8054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:38.621183    8054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:38.621748    8054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:38.623328    8054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:38.618551    8054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:38.619310    8054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:38.621183    8054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:38.621748    8054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:38.623328    8054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:55:38.627341  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:38.627361  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:38.652980  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:38.653021  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:38.702202  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:38.702236  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:38.775658  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:38.775742  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
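The journal and kernel-log steps are deliberately bounded so each retry stays cheap: journalctl takes only the newest 400 lines per unit, and dmesg is filtered down to warnings and worse. Annotated below; the flag meanings come from util-linux and systemd documentation rather than from the log itself, and -P in particular is --nopager only on recent util-linux:

    # Newest 400 journal entries for the kubelet unit.
    sudo journalctl -u kubelet -n 400
    # Kernel ring buffer: no pager (-P), human-readable (-H), colour off
    # (-L=never), warning severity or worse, capped at 400 lines.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400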
	I1213 11:55:41.293324  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:41.304911  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:41.304988  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:41.329954  604010 cri.go:89] found id: ""
	I1213 11:55:41.329981  604010 logs.go:282] 0 containers: []
	W1213 11:55:41.329990  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:41.329997  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:41.330068  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:41.356810  604010 cri.go:89] found id: ""
	I1213 11:55:41.356835  604010 logs.go:282] 0 containers: []
	W1213 11:55:41.356845  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:41.356851  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:41.356911  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:41.382782  604010 cri.go:89] found id: ""
	I1213 11:55:41.382807  604010 logs.go:282] 0 containers: []
	W1213 11:55:41.382816  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:41.382823  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:41.382882  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:41.411145  604010 cri.go:89] found id: ""
	I1213 11:55:41.411170  604010 logs.go:282] 0 containers: []
	W1213 11:55:41.411179  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:41.411186  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:41.411242  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:41.439686  604010 cri.go:89] found id: ""
	I1213 11:55:41.439713  604010 logs.go:282] 0 containers: []
	W1213 11:55:41.439722  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:41.439729  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:41.439797  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:41.463861  604010 cri.go:89] found id: ""
	I1213 11:55:41.463884  604010 logs.go:282] 0 containers: []
	W1213 11:55:41.463893  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:41.463900  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:41.463958  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:41.488219  604010 cri.go:89] found id: ""
	I1213 11:55:41.488243  604010 logs.go:282] 0 containers: []
	W1213 11:55:41.488252  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:41.488258  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:41.488339  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:41.513569  604010 cri.go:89] found id: ""
	I1213 11:55:41.513600  604010 logs.go:282] 0 containers: []
	W1213 11:55:41.513609  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:41.513619  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:41.513656  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:41.570549  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:41.570585  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:41.587559  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:41.587588  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:41.654460  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:41.646598    8171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:41.647136    8171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:41.648610    8171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:41.649143    8171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:41.650745    8171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:41.646598    8171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:41.647136    8171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:41.648610    8171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:41.649143    8171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:41.650745    8171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:55:41.654481  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:41.654494  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:41.679884  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:41.679918  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
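Every "describe nodes" attempt fails identically: kubectl is run inside the node from the versioned binary under /var/lib/minikube/binaries/v1.35.0-beta.0/ with the static kubeconfig at /var/lib/minikube/kubeconfig, dials https://localhost:8443, and gets connection refused on [::1]:8443. Connection refused (as opposed to a timeout or a TLS error) means nothing is listening on the apiserver port at all, which is consistent with crictl finding no kube-apiserver container. Two quick in-node checks that would confirm this (hypothetical diagnostics, not part of the test run):

    # Is anything bound to the apiserver port?
    sudo ss -ltnp | grep ':8443'
    # If a listener appears, probe the apiserver health endpoint directly.
    curl -k https://localhost:8443/healthz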
	I1213 11:55:44.238824  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:44.249658  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:44.249735  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:44.274262  604010 cri.go:89] found id: ""
	I1213 11:55:44.274287  604010 logs.go:282] 0 containers: []
	W1213 11:55:44.274297  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:44.274303  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:44.274365  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:44.298725  604010 cri.go:89] found id: ""
	I1213 11:55:44.298750  604010 logs.go:282] 0 containers: []
	W1213 11:55:44.298759  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:44.298765  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:44.298831  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:44.332989  604010 cri.go:89] found id: ""
	I1213 11:55:44.333019  604010 logs.go:282] 0 containers: []
	W1213 11:55:44.333028  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:44.333035  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:44.333095  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:44.358205  604010 cri.go:89] found id: ""
	I1213 11:55:44.358229  604010 logs.go:282] 0 containers: []
	W1213 11:55:44.358238  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:44.358250  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:44.358313  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:44.383989  604010 cri.go:89] found id: ""
	I1213 11:55:44.384017  604010 logs.go:282] 0 containers: []
	W1213 11:55:44.384027  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:44.384034  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:44.384099  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:44.409651  604010 cri.go:89] found id: ""
	I1213 11:55:44.409677  604010 logs.go:282] 0 containers: []
	W1213 11:55:44.409686  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:44.409692  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:44.409751  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:44.435253  604010 cri.go:89] found id: ""
	I1213 11:55:44.435280  604010 logs.go:282] 0 containers: []
	W1213 11:55:44.435288  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:44.435295  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:44.435354  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:44.459342  604010 cri.go:89] found id: ""
	I1213 11:55:44.459379  604010 logs.go:282] 0 containers: []
	W1213 11:55:44.459388  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:44.459398  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:44.459409  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:44.527760  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:44.518804    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:44.519537    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:44.521331    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:44.521838    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:44.523375    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:44.518804    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:44.519537    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:44.521331    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:44.521838    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:44.523375    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:55:44.527781  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:44.527793  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:44.554052  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:44.554086  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:44.583553  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:44.583582  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:44.639690  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:44.639723  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:47.156860  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:47.167658  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:47.167728  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:47.191689  604010 cri.go:89] found id: ""
	I1213 11:55:47.191714  604010 logs.go:282] 0 containers: []
	W1213 11:55:47.191723  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:47.191730  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:47.191790  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:47.217625  604010 cri.go:89] found id: ""
	I1213 11:55:47.217652  604010 logs.go:282] 0 containers: []
	W1213 11:55:47.217665  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:47.217679  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:47.217756  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:47.246057  604010 cri.go:89] found id: ""
	I1213 11:55:47.246080  604010 logs.go:282] 0 containers: []
	W1213 11:55:47.246088  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:47.246094  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:47.246153  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:47.272649  604010 cri.go:89] found id: ""
	I1213 11:55:47.272673  604010 logs.go:282] 0 containers: []
	W1213 11:55:47.272682  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:47.272688  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:47.272747  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:47.297156  604010 cri.go:89] found id: ""
	I1213 11:55:47.297178  604010 logs.go:282] 0 containers: []
	W1213 11:55:47.297186  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:47.297192  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:47.297249  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:47.321533  604010 cri.go:89] found id: ""
	I1213 11:55:47.321555  604010 logs.go:282] 0 containers: []
	W1213 11:55:47.321563  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:47.321570  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:47.321647  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:47.347526  604010 cri.go:89] found id: ""
	I1213 11:55:47.347548  604010 logs.go:282] 0 containers: []
	W1213 11:55:47.347558  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:47.347566  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:47.347743  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:47.373360  604010 cri.go:89] found id: ""
	I1213 11:55:47.373437  604010 logs.go:282] 0 containers: []
	W1213 11:55:47.373466  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:47.373491  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:47.373544  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:47.406388  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:47.406463  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:47.467132  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:47.467169  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:47.482951  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:47.482977  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:47.547530  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:47.538747    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:47.539246    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:47.540864    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:47.541466    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:47.543147    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:47.538747    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:47.539246    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:47.540864    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:47.541466    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:47.543147    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:55:47.547599  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:47.547625  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:50.076734  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:50.088146  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:50.088221  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:50.114846  604010 cri.go:89] found id: ""
	I1213 11:55:50.114871  604010 logs.go:282] 0 containers: []
	W1213 11:55:50.114879  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:50.114885  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:50.114952  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:50.140346  604010 cri.go:89] found id: ""
	I1213 11:55:50.140383  604010 logs.go:282] 0 containers: []
	W1213 11:55:50.140393  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:50.140400  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:50.140461  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:50.165612  604010 cri.go:89] found id: ""
	I1213 11:55:50.165647  604010 logs.go:282] 0 containers: []
	W1213 11:55:50.165656  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:50.165663  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:50.165735  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:50.193167  604010 cri.go:89] found id: ""
	I1213 11:55:50.193196  604010 logs.go:282] 0 containers: []
	W1213 11:55:50.193205  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:50.193211  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:50.193288  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:50.217552  604010 cri.go:89] found id: ""
	I1213 11:55:50.217602  604010 logs.go:282] 0 containers: []
	W1213 11:55:50.217622  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:50.217630  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:50.217703  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:50.243207  604010 cri.go:89] found id: ""
	I1213 11:55:50.243230  604010 logs.go:282] 0 containers: []
	W1213 11:55:50.243240  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:50.243246  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:50.243306  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:50.267889  604010 cri.go:89] found id: ""
	I1213 11:55:50.267961  604010 logs.go:282] 0 containers: []
	W1213 11:55:50.267980  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:50.267988  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:50.268050  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:50.293393  604010 cri.go:89] found id: ""
	I1213 11:55:50.293420  604010 logs.go:282] 0 containers: []
	W1213 11:55:50.293429  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:50.293448  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:50.293461  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:50.358945  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:50.350414    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:50.351257    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:50.352886    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:50.353223    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:50.354777    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:50.350414    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:50.351257    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:50.352886    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:50.353223    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:50.354777    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:55:50.358967  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:50.358982  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:50.384886  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:50.384922  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:50.416671  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:50.416697  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:50.472398  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:50.472437  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:52.988724  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:53.000673  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:53.000825  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:53.028787  604010 cri.go:89] found id: ""
	I1213 11:55:53.028812  604010 logs.go:282] 0 containers: []
	W1213 11:55:53.028822  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:53.028829  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:53.028960  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:53.059024  604010 cri.go:89] found id: ""
	I1213 11:55:53.059060  604010 logs.go:282] 0 containers: []
	W1213 11:55:53.059069  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:53.059076  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:53.059137  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:53.084415  604010 cri.go:89] found id: ""
	I1213 11:55:53.084443  604010 logs.go:282] 0 containers: []
	W1213 11:55:53.084452  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:53.084459  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:53.084519  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:53.111367  604010 cri.go:89] found id: ""
	I1213 11:55:53.111402  604010 logs.go:282] 0 containers: []
	W1213 11:55:53.111413  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:53.111420  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:53.111485  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:53.138948  604010 cri.go:89] found id: ""
	I1213 11:55:53.138973  604010 logs.go:282] 0 containers: []
	W1213 11:55:53.138992  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:53.138999  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:53.139058  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:53.164317  604010 cri.go:89] found id: ""
	I1213 11:55:53.164341  604010 logs.go:282] 0 containers: []
	W1213 11:55:53.164350  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:53.164363  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:53.164420  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:53.189237  604010 cri.go:89] found id: ""
	I1213 11:55:53.189263  604010 logs.go:282] 0 containers: []
	W1213 11:55:53.189284  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:53.189291  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:53.189365  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:53.213792  604010 cri.go:89] found id: ""
	I1213 11:55:53.213831  604010 logs.go:282] 0 containers: []
	W1213 11:55:53.213840  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:53.213849  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:53.213864  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:53.268812  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:53.268852  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:53.284561  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:53.284592  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:53.350505  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:53.342240    8626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:53.342928    8626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:53.344529    8626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:53.345039    8626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:53.346717    8626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
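	Each failed "describe nodes" attempt here is the same symptom, not a new error: kubectl runs inside the node against /var/lib/minikube/kubeconfig, dials the apiserver at localhost:8443, and is refused. That matches the empty crictl listings above: there is no kube-apiserver container to answer. A quick hypothetical cross-check from inside the node (ss is an assumption; this command is not part of minikube's loop):
	
	# Hypothetical check, not from this log: is anything listening on 8443?
	sudo ss -ltnp | grep -w 8443 || echo "nothing listening on :8443"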
	I1213 11:55:53.350528  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:53.350540  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:53.375550  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:53.375586  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
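	The cycle above is minikube's apiserver health-wait fallback: pgrep for a kube-apiserver process, one crictl query per control-plane component, and, when everything comes back empty, a sweep of kubelet, dmesg, describe-nodes, containerd, and container-status logs. For reference, the same probe sequence can be reproduced by hand with the commands quoted in this log; this is a minimal sketch, assuming it is run as a shell script on the node itself:
	
	#!/bin/bash
	# Probe for control-plane containers the way this log does:
	# one crictl query per component name minikube looks for.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$c")
	  [ -z "$ids" ] && echo "no container matching \"$c\""
	done
	# Then gather the same logs the retry loop collects.
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u containerd -n 400
	sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a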
	I1213 11:55:55.903770  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:55.916528  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:55.916606  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:55.974216  604010 cri.go:89] found id: ""
	I1213 11:55:55.974238  604010 logs.go:282] 0 containers: []
	W1213 11:55:55.974246  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:55.974254  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:55.974316  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:56.009212  604010 cri.go:89] found id: ""
	I1213 11:55:56.009235  604010 logs.go:282] 0 containers: []
	W1213 11:55:56.009243  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:56.009250  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:56.009308  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:56.036696  604010 cri.go:89] found id: ""
	I1213 11:55:56.036722  604010 logs.go:282] 0 containers: []
	W1213 11:55:56.036731  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:56.036738  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:56.036821  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:56.062550  604010 cri.go:89] found id: ""
	I1213 11:55:56.062577  604010 logs.go:282] 0 containers: []
	W1213 11:55:56.062586  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:56.062592  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:56.062649  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:56.087384  604010 cri.go:89] found id: ""
	I1213 11:55:56.087410  604010 logs.go:282] 0 containers: []
	W1213 11:55:56.087419  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:56.087425  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:56.087506  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:56.113129  604010 cri.go:89] found id: ""
	I1213 11:55:56.113153  604010 logs.go:282] 0 containers: []
	W1213 11:55:56.113164  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:56.113171  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:56.113234  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:56.137999  604010 cri.go:89] found id: ""
	I1213 11:55:56.138021  604010 logs.go:282] 0 containers: []
	W1213 11:55:56.138030  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:56.138036  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:56.138094  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:56.164815  604010 cri.go:89] found id: ""
	I1213 11:55:56.164841  604010 logs.go:282] 0 containers: []
	W1213 11:55:56.164851  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:56.164861  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:56.164872  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:56.190007  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:56.190042  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:56.222068  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:56.222097  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:56.277067  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:56.277104  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:56.293465  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:56.293495  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:56.360755  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:56.351282    8753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:56.352626    8753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:56.353483    8753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:56.354403    8753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:56.356173    8753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:55:58.861486  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:58.872284  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:58.872365  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:58.898051  604010 cri.go:89] found id: ""
	I1213 11:55:58.898077  604010 logs.go:282] 0 containers: []
	W1213 11:55:58.898086  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:58.898093  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:58.898152  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:58.937804  604010 cri.go:89] found id: ""
	I1213 11:55:58.937834  604010 logs.go:282] 0 containers: []
	W1213 11:55:58.937852  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:58.937865  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:58.937957  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:58.987256  604010 cri.go:89] found id: ""
	I1213 11:55:58.987290  604010 logs.go:282] 0 containers: []
	W1213 11:55:58.987301  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:58.987308  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:58.987378  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:59.018252  604010 cri.go:89] found id: ""
	I1213 11:55:59.018274  604010 logs.go:282] 0 containers: []
	W1213 11:55:59.018282  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:59.018289  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:59.018350  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:59.046993  604010 cri.go:89] found id: ""
	I1213 11:55:59.047018  604010 logs.go:282] 0 containers: []
	W1213 11:55:59.047027  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:59.047033  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:59.047089  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:59.072813  604010 cri.go:89] found id: ""
	I1213 11:55:59.072888  604010 logs.go:282] 0 containers: []
	W1213 11:55:59.072903  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:59.072913  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:59.072988  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:59.097766  604010 cri.go:89] found id: ""
	I1213 11:55:59.097792  604010 logs.go:282] 0 containers: []
	W1213 11:55:59.097801  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:59.097808  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:59.097868  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:59.125013  604010 cri.go:89] found id: ""
	I1213 11:55:59.125038  604010 logs.go:282] 0 containers: []
	W1213 11:55:59.125047  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:59.125056  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:59.125070  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:59.150130  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:59.150164  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:59.178033  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:59.178107  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:59.233761  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:59.233795  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:59.249736  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:59.249772  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:59.314577  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:59.305285    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:59.306134    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:59.307637    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:59.308126    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:59.310000    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:01.814837  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:01.826268  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:01.826352  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:01.856935  604010 cri.go:89] found id: ""
	I1213 11:56:01.856960  604010 logs.go:282] 0 containers: []
	W1213 11:56:01.856969  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:01.856979  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:01.857039  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:01.884429  604010 cri.go:89] found id: ""
	I1213 11:56:01.884454  604010 logs.go:282] 0 containers: []
	W1213 11:56:01.884463  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:01.884470  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:01.884530  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:01.929790  604010 cri.go:89] found id: ""
	I1213 11:56:01.929812  604010 logs.go:282] 0 containers: []
	W1213 11:56:01.929821  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:01.929828  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:01.929890  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:01.997657  604010 cri.go:89] found id: ""
	I1213 11:56:01.997686  604010 logs.go:282] 0 containers: []
	W1213 11:56:01.997703  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:01.997713  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:01.997785  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:02.027667  604010 cri.go:89] found id: ""
	I1213 11:56:02.027692  604010 logs.go:282] 0 containers: []
	W1213 11:56:02.027701  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:02.027707  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:02.027770  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:02.052911  604010 cri.go:89] found id: ""
	I1213 11:56:02.052935  604010 logs.go:282] 0 containers: []
	W1213 11:56:02.052944  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:02.052950  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:02.053009  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:02.078744  604010 cri.go:89] found id: ""
	I1213 11:56:02.078813  604010 logs.go:282] 0 containers: []
	W1213 11:56:02.078839  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:02.078857  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:02.078946  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:02.104065  604010 cri.go:89] found id: ""
	I1213 11:56:02.104136  604010 logs.go:282] 0 containers: []
	W1213 11:56:02.104158  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:02.104181  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:02.104219  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:02.177602  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:02.166576    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:02.167162    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:02.170937    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:02.171543    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:02.173272    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:02.177623  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:02.177635  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:02.203025  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:02.203064  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:02.232249  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:02.232275  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:02.288746  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:02.288781  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:04.806667  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:04.817452  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:04.817526  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:04.843671  604010 cri.go:89] found id: ""
	I1213 11:56:04.843696  604010 logs.go:282] 0 containers: []
	W1213 11:56:04.843705  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:04.843712  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:04.843770  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:04.869847  604010 cri.go:89] found id: ""
	I1213 11:56:04.869873  604010 logs.go:282] 0 containers: []
	W1213 11:56:04.869882  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:04.869889  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:04.869949  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:04.895727  604010 cri.go:89] found id: ""
	I1213 11:56:04.895750  604010 logs.go:282] 0 containers: []
	W1213 11:56:04.895759  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:04.895766  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:04.895874  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:04.958057  604010 cri.go:89] found id: ""
	I1213 11:56:04.958083  604010 logs.go:282] 0 containers: []
	W1213 11:56:04.958093  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:04.958102  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:04.958164  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:05.011151  604010 cri.go:89] found id: ""
	I1213 11:56:05.011180  604010 logs.go:282] 0 containers: []
	W1213 11:56:05.011191  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:05.011198  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:05.011301  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:05.042226  604010 cri.go:89] found id: ""
	I1213 11:56:05.042257  604010 logs.go:282] 0 containers: []
	W1213 11:56:05.042267  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:05.042274  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:05.042344  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:05.067033  604010 cri.go:89] found id: ""
	I1213 11:56:05.067057  604010 logs.go:282] 0 containers: []
	W1213 11:56:05.067066  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:05.067073  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:05.067137  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:05.092704  604010 cri.go:89] found id: ""
	I1213 11:56:05.092729  604010 logs.go:282] 0 containers: []
	W1213 11:56:05.092740  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:05.092751  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:05.092789  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:05.149091  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:05.149142  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:05.165497  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:05.165536  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:05.234289  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:05.225131    9076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:05.225892    9076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:05.227653    9076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:05.228318    9076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:05.230170    9076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:05.234313  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:05.234326  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:05.259839  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:05.259877  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:07.795276  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:07.805797  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:07.805865  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:07.833431  604010 cri.go:89] found id: ""
	I1213 11:56:07.833458  604010 logs.go:282] 0 containers: []
	W1213 11:56:07.833467  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:07.833474  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:07.833533  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:07.859570  604010 cri.go:89] found id: ""
	I1213 11:56:07.859596  604010 logs.go:282] 0 containers: []
	W1213 11:56:07.859605  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:07.859612  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:07.859680  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:07.885597  604010 cri.go:89] found id: ""
	I1213 11:56:07.885621  604010 logs.go:282] 0 containers: []
	W1213 11:56:07.885630  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:07.885636  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:07.885693  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:07.932272  604010 cri.go:89] found id: ""
	I1213 11:56:07.932295  604010 logs.go:282] 0 containers: []
	W1213 11:56:07.932304  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:07.932311  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:07.932368  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:07.971123  604010 cri.go:89] found id: ""
	I1213 11:56:07.971146  604010 logs.go:282] 0 containers: []
	W1213 11:56:07.971156  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:07.971162  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:07.971223  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:08.020370  604010 cri.go:89] found id: ""
	I1213 11:56:08.020442  604010 logs.go:282] 0 containers: []
	W1213 11:56:08.020470  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:08.020488  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:08.020576  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:08.050772  604010 cri.go:89] found id: ""
	I1213 11:56:08.050843  604010 logs.go:282] 0 containers: []
	W1213 11:56:08.050870  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:08.050888  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:08.050977  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:08.076860  604010 cri.go:89] found id: ""
	I1213 11:56:08.076891  604010 logs.go:282] 0 containers: []
	W1213 11:56:08.076901  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:08.076911  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:08.076923  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:08.136737  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:08.136772  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:08.152700  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:08.152856  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:08.216955  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:08.208521    9190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:08.209263    9190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:08.210851    9190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:08.211330    9190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:08.212940    9190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:08.217027  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:08.217055  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:08.242524  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:08.242562  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:10.774825  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:10.785504  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:10.785573  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:10.812402  604010 cri.go:89] found id: ""
	I1213 11:56:10.812424  604010 logs.go:282] 0 containers: []
	W1213 11:56:10.812433  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:10.812440  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:10.812495  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:10.837362  604010 cri.go:89] found id: ""
	I1213 11:56:10.837387  604010 logs.go:282] 0 containers: []
	W1213 11:56:10.837396  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:10.837402  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:10.837461  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:10.862348  604010 cri.go:89] found id: ""
	I1213 11:56:10.862374  604010 logs.go:282] 0 containers: []
	W1213 11:56:10.862382  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:10.862389  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:10.862447  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:10.886922  604010 cri.go:89] found id: ""
	I1213 11:56:10.886999  604010 logs.go:282] 0 containers: []
	W1213 11:56:10.887020  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:10.887038  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:10.887121  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:10.931278  604010 cri.go:89] found id: ""
	I1213 11:56:10.931347  604010 logs.go:282] 0 containers: []
	W1213 11:56:10.931369  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:10.931387  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:10.931475  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:10.974160  604010 cri.go:89] found id: ""
	I1213 11:56:10.974226  604010 logs.go:282] 0 containers: []
	W1213 11:56:10.974254  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:10.974272  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:10.974357  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:11.010218  604010 cri.go:89] found id: ""
	I1213 11:56:11.010290  604010 logs.go:282] 0 containers: []
	W1213 11:56:11.010313  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:11.010332  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:11.010424  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:11.039062  604010 cri.go:89] found id: ""
	I1213 11:56:11.039097  604010 logs.go:282] 0 containers: []
	W1213 11:56:11.039108  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:11.039118  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:11.039130  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:11.095996  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:11.096035  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:11.112552  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:11.112583  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:11.181416  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:11.172048    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:11.172697    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:11.174491    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:11.175376    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:11.177169    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:11.181436  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:11.181451  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:11.206963  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:11.207000  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:13.739447  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:13.750286  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:13.750359  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:13.776350  604010 cri.go:89] found id: ""
	I1213 11:56:13.776379  604010 logs.go:282] 0 containers: []
	W1213 11:56:13.776388  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:13.776395  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:13.776460  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:13.800680  604010 cri.go:89] found id: ""
	I1213 11:56:13.800705  604010 logs.go:282] 0 containers: []
	W1213 11:56:13.800714  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:13.800721  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:13.800780  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:13.826000  604010 cri.go:89] found id: ""
	I1213 11:56:13.826038  604010 logs.go:282] 0 containers: []
	W1213 11:56:13.826050  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:13.826072  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:13.826155  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:13.850538  604010 cri.go:89] found id: ""
	I1213 11:56:13.850564  604010 logs.go:282] 0 containers: []
	W1213 11:56:13.850582  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:13.850611  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:13.850706  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:13.879462  604010 cri.go:89] found id: ""
	I1213 11:56:13.879488  604010 logs.go:282] 0 containers: []
	W1213 11:56:13.879496  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:13.879503  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:13.879559  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:13.904388  604010 cri.go:89] found id: ""
	I1213 11:56:13.904414  604010 logs.go:282] 0 containers: []
	W1213 11:56:13.904422  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:13.904432  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:13.904488  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:13.936193  604010 cri.go:89] found id: ""
	I1213 11:56:13.936221  604010 logs.go:282] 0 containers: []
	W1213 11:56:13.936229  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:13.936236  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:13.936304  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:13.979520  604010 cri.go:89] found id: ""
	I1213 11:56:13.979547  604010 logs.go:282] 0 containers: []
	W1213 11:56:13.979556  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:13.979566  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:13.979577  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:14.047872  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:14.047909  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:14.064531  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:14.064559  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:14.132145  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:14.123439    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:14.124184    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:14.125827    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:14.126337    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:14.128067    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:14.132167  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:14.132180  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:14.158143  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:14.158181  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:16.686213  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:16.696766  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:16.696836  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:16.720811  604010 cri.go:89] found id: ""
	I1213 11:56:16.720840  604010 logs.go:282] 0 containers: []
	W1213 11:56:16.720849  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:16.720856  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:16.720916  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:16.746135  604010 cri.go:89] found id: ""
	I1213 11:56:16.746162  604010 logs.go:282] 0 containers: []
	W1213 11:56:16.746170  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:16.746177  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:16.746235  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:16.772135  604010 cri.go:89] found id: ""
	I1213 11:56:16.772162  604010 logs.go:282] 0 containers: []
	W1213 11:56:16.772171  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:16.772177  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:16.772263  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:16.801712  604010 cri.go:89] found id: ""
	I1213 11:56:16.801738  604010 logs.go:282] 0 containers: []
	W1213 11:56:16.801748  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:16.801754  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:16.801813  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:16.825625  604010 cri.go:89] found id: ""
	I1213 11:56:16.825649  604010 logs.go:282] 0 containers: []
	W1213 11:56:16.825658  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:16.825664  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:16.825723  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:16.850464  604010 cri.go:89] found id: ""
	I1213 11:56:16.850490  604010 logs.go:282] 0 containers: []
	W1213 11:56:16.850498  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:16.850505  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:16.850561  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:16.882804  604010 cri.go:89] found id: ""
	I1213 11:56:16.882826  604010 logs.go:282] 0 containers: []
	W1213 11:56:16.882835  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:16.882848  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:16.882906  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:16.908046  604010 cri.go:89] found id: ""
	I1213 11:56:16.908071  604010 logs.go:282] 0 containers: []
	W1213 11:56:16.908080  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:16.908090  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:16.908104  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:17.008503  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:17.008590  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:17.024851  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:17.024884  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:17.092834  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:17.083994    9524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:17.084849    9524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:17.086559    9524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:17.087267    9524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:17.088871    9524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:17.092854  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:17.092867  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:17.118299  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:17.118334  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
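
The `found id: ""` / `0 containers` pairs above come from running crictl with `--quiet`, which prints only newline-separated container IDs; empty output means the component has no container at all, not even an exited one (`-a`). A rough Go sketch of that listing step (an approximation of what the cri.go lines log here, not minikube's actual code; assumes crictl on PATH and sudo access):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainers runs "crictl ps -a --quiet --name=<component>" and
    // splits the output into container IDs; an empty result corresponds
    // to the `found id: ""` lines in the log above.
    func listContainers(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	var ids []string
    	for _, id := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if id != "" {
    			ids = append(ids, id)
    		}
    	}
    	return ids, nil
    }

    func main() {
    	ids, err := listContainers("kube-apiserver")
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	fmt.Printf("%d containers: %v\n", len(ids), ids)
    }

On a healthy control plane the same call would return at least one ID per component; here kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet and kubernetes-dashboard all come back empty.
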
	I1213 11:56:19.647201  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:19.658196  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:19.658313  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:19.681845  604010 cri.go:89] found id: ""
	I1213 11:56:19.681924  604010 logs.go:282] 0 containers: []
	W1213 11:56:19.681947  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:19.681966  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:19.682053  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:19.707693  604010 cri.go:89] found id: ""
	I1213 11:56:19.707717  604010 logs.go:282] 0 containers: []
	W1213 11:56:19.707727  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:19.707733  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:19.707809  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:19.732762  604010 cri.go:89] found id: ""
	I1213 11:56:19.732788  604010 logs.go:282] 0 containers: []
	W1213 11:56:19.732797  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:19.732804  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:19.732884  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:19.757359  604010 cri.go:89] found id: ""
	I1213 11:56:19.757393  604010 logs.go:282] 0 containers: []
	W1213 11:56:19.757402  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:19.757423  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:19.757500  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:19.785446  604010 cri.go:89] found id: ""
	I1213 11:56:19.785473  604010 logs.go:282] 0 containers: []
	W1213 11:56:19.785482  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:19.785489  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:19.785610  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:19.812583  604010 cri.go:89] found id: ""
	I1213 11:56:19.812607  604010 logs.go:282] 0 containers: []
	W1213 11:56:19.812616  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:19.812623  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:19.812681  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:19.836875  604010 cri.go:89] found id: ""
	I1213 11:56:19.836901  604010 logs.go:282] 0 containers: []
	W1213 11:56:19.836910  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:19.836919  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:19.837022  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:19.861557  604010 cri.go:89] found id: ""
	I1213 11:56:19.861584  604010 logs.go:282] 0 containers: []
	W1213 11:56:19.861595  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:19.861610  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:19.861631  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:19.920472  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:19.920510  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:19.973429  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:19.973459  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:20.062908  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:20.053401    9639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:20.054064    9639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:20.055967    9639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:20.056677    9639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:20.058665    9639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:20.062932  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:20.062945  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:20.089847  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:20.089889  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
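
The `sudo pgrep -xnf kube-apiserver.*minikube.*` probes recur roughly every three seconds, so this whole section is one long wait loop: check for an apiserver process and, if none exists, re-gather the diagnostics above and try again. A hedged Go sketch of such a loop (function name and timeout are hypothetical, not minikube's actual implementation):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer retries the pgrep probe seen in the log until it
    // succeeds or the deadline passes. pgrep exits 0 only when a matching
    // process exists, so Run() == nil means the apiserver is up.
    func waitForAPIServer(timeout time.Duration) bool {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			return true
    		}
    		time.Sleep(3 * time.Second) // matches the cadence seen in this log
    	}
    	return false
    }

    func main() {
    	if !waitForAPIServer(2 * time.Minute) {
    		fmt.Println("kube-apiserver never came up")
    	}
    }
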
	I1213 11:56:22.621952  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:22.633355  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:22.633434  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:22.661131  604010 cri.go:89] found id: ""
	I1213 11:56:22.661156  604010 logs.go:282] 0 containers: []
	W1213 11:56:22.661165  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:22.661172  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:22.661231  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:22.687274  604010 cri.go:89] found id: ""
	I1213 11:56:22.687309  604010 logs.go:282] 0 containers: []
	W1213 11:56:22.687319  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:22.687325  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:22.687385  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:22.712134  604010 cri.go:89] found id: ""
	I1213 11:56:22.712162  604010 logs.go:282] 0 containers: []
	W1213 11:56:22.712177  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:22.712184  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:22.712243  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:22.737658  604010 cri.go:89] found id: ""
	I1213 11:56:22.737684  604010 logs.go:282] 0 containers: []
	W1213 11:56:22.737693  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:22.737699  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:22.737756  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:22.762933  604010 cri.go:89] found id: ""
	I1213 11:56:22.762958  604010 logs.go:282] 0 containers: []
	W1213 11:56:22.762966  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:22.762973  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:22.763030  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:22.787428  604010 cri.go:89] found id: ""
	I1213 11:56:22.787453  604010 logs.go:282] 0 containers: []
	W1213 11:56:22.787463  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:22.787469  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:22.787531  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:22.812716  604010 cri.go:89] found id: ""
	I1213 11:56:22.812746  604010 logs.go:282] 0 containers: []
	W1213 11:56:22.812754  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:22.812761  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:22.812849  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:22.837817  604010 cri.go:89] found id: ""
	I1213 11:56:22.837844  604010 logs.go:282] 0 containers: []
	W1213 11:56:22.837853  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:22.837863  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:22.837883  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:22.893260  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:22.893294  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:22.917278  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:22.917388  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:23.026082  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:23.017267    9756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:23.017959    9756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:23.019734    9756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:23.020131    9756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:23.021757    9756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:23.026106  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:23.026120  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:23.052026  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:23.052065  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:25.580545  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:25.591333  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:25.591403  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:25.616731  604010 cri.go:89] found id: ""
	I1213 11:56:25.616754  604010 logs.go:282] 0 containers: []
	W1213 11:56:25.616764  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:25.616771  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:25.616827  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:25.646111  604010 cri.go:89] found id: ""
	I1213 11:56:25.646135  604010 logs.go:282] 0 containers: []
	W1213 11:56:25.646144  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:25.646151  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:25.646212  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:25.674261  604010 cri.go:89] found id: ""
	I1213 11:56:25.674284  604010 logs.go:282] 0 containers: []
	W1213 11:56:25.674293  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:25.674300  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:25.674358  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:25.700613  604010 cri.go:89] found id: ""
	I1213 11:56:25.700636  604010 logs.go:282] 0 containers: []
	W1213 11:56:25.700644  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:25.700650  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:25.700707  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:25.728704  604010 cri.go:89] found id: ""
	I1213 11:56:25.728789  604010 logs.go:282] 0 containers: []
	W1213 11:56:25.728805  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:25.728818  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:25.728885  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:25.761516  604010 cri.go:89] found id: ""
	I1213 11:56:25.761538  604010 logs.go:282] 0 containers: []
	W1213 11:56:25.761548  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:25.761555  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:25.761635  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:25.786867  604010 cri.go:89] found id: ""
	I1213 11:56:25.786895  604010 logs.go:282] 0 containers: []
	W1213 11:56:25.786905  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:25.786911  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:25.786970  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:25.811462  604010 cri.go:89] found id: ""
	I1213 11:56:25.811485  604010 logs.go:282] 0 containers: []
	W1213 11:56:25.811493  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:25.811503  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:25.811514  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:25.866924  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:25.866955  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:25.883500  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:25.883530  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:25.977779  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:25.966190    9862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:25.967164    9862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:25.969705    9862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:25.971514    9862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:25.972246    9862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:25.977806  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:25.977819  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:26.009949  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:26.010030  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:28.542187  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:28.552481  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:28.552607  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:28.581578  604010 cri.go:89] found id: ""
	I1213 11:56:28.581611  604010 logs.go:282] 0 containers: []
	W1213 11:56:28.581627  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:28.581634  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:28.581690  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:28.607125  604010 cri.go:89] found id: ""
	I1213 11:56:28.607149  604010 logs.go:282] 0 containers: []
	W1213 11:56:28.607157  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:28.607163  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:28.607220  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:28.632720  604010 cri.go:89] found id: ""
	I1213 11:56:28.632747  604010 logs.go:282] 0 containers: []
	W1213 11:56:28.632758  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:28.632765  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:28.632822  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:28.658222  604010 cri.go:89] found id: ""
	I1213 11:56:28.658251  604010 logs.go:282] 0 containers: []
	W1213 11:56:28.658260  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:28.658267  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:28.658325  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:28.682387  604010 cri.go:89] found id: ""
	I1213 11:56:28.682425  604010 logs.go:282] 0 containers: []
	W1213 11:56:28.682436  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:28.682443  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:28.682519  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:28.707965  604010 cri.go:89] found id: ""
	I1213 11:56:28.708001  604010 logs.go:282] 0 containers: []
	W1213 11:56:28.708011  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:28.708024  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:28.708094  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:28.737087  604010 cri.go:89] found id: ""
	I1213 11:56:28.737115  604010 logs.go:282] 0 containers: []
	W1213 11:56:28.737124  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:28.737130  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:28.737189  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:28.761982  604010 cri.go:89] found id: ""
	I1213 11:56:28.762059  604010 logs.go:282] 0 containers: []
	W1213 11:56:28.762081  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:28.762108  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:28.762148  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:28.817649  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:28.817687  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:28.833874  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:28.833904  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:28.901287  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:28.892846    9975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:28.893499    9975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:28.895107    9975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:28.895608    9975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:28.897226    9975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:28.901308  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:28.901319  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:28.943036  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:28.943114  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:31.504085  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:31.516702  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:31.516776  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:31.541829  604010 cri.go:89] found id: ""
	I1213 11:56:31.541852  604010 logs.go:282] 0 containers: []
	W1213 11:56:31.541861  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:31.541868  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:31.541927  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:31.567128  604010 cri.go:89] found id: ""
	I1213 11:56:31.567153  604010 logs.go:282] 0 containers: []
	W1213 11:56:31.567162  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:31.567169  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:31.567228  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:31.592889  604010 cri.go:89] found id: ""
	I1213 11:56:31.592914  604010 logs.go:282] 0 containers: []
	W1213 11:56:31.592924  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:31.592931  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:31.592988  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:31.620810  604010 cri.go:89] found id: ""
	I1213 11:56:31.620834  604010 logs.go:282] 0 containers: []
	W1213 11:56:31.620843  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:31.620850  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:31.620907  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:31.645931  604010 cri.go:89] found id: ""
	I1213 11:56:31.645958  604010 logs.go:282] 0 containers: []
	W1213 11:56:31.645968  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:31.645975  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:31.646034  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:31.671037  604010 cri.go:89] found id: ""
	I1213 11:56:31.671065  604010 logs.go:282] 0 containers: []
	W1213 11:56:31.671074  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:31.671116  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:31.671180  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:31.696779  604010 cri.go:89] found id: ""
	I1213 11:56:31.696805  604010 logs.go:282] 0 containers: []
	W1213 11:56:31.696814  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:31.696820  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:31.696886  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:31.721074  604010 cri.go:89] found id: ""
	I1213 11:56:31.721152  604010 logs.go:282] 0 containers: []
	W1213 11:56:31.721175  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:31.721198  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:31.721238  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:31.776685  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:31.776720  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:31.793212  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:31.793241  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:31.856954  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:31.848666   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:31.849288   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:31.850793   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:31.851220   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:31.852660   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:31.857017  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:31.857044  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:31.882038  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:31.882070  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:34.425618  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:34.436018  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:34.436163  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:34.460322  604010 cri.go:89] found id: ""
	I1213 11:56:34.460347  604010 logs.go:282] 0 containers: []
	W1213 11:56:34.460356  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:34.460362  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:34.460442  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:34.484514  604010 cri.go:89] found id: ""
	I1213 11:56:34.484582  604010 logs.go:282] 0 containers: []
	W1213 11:56:34.484607  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:34.484622  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:34.484695  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:34.513969  604010 cri.go:89] found id: ""
	I1213 11:56:34.514006  604010 logs.go:282] 0 containers: []
	W1213 11:56:34.514016  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:34.514023  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:34.514089  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:34.541219  604010 cri.go:89] found id: ""
	I1213 11:56:34.541245  604010 logs.go:282] 0 containers: []
	W1213 11:56:34.541254  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:34.541260  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:34.541323  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:34.570631  604010 cri.go:89] found id: ""
	I1213 11:56:34.570653  604010 logs.go:282] 0 containers: []
	W1213 11:56:34.570662  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:34.570668  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:34.570749  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:34.594597  604010 cri.go:89] found id: ""
	I1213 11:56:34.594636  604010 logs.go:282] 0 containers: []
	W1213 11:56:34.594645  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:34.594651  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:34.594741  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:34.618131  604010 cri.go:89] found id: ""
	I1213 11:56:34.618159  604010 logs.go:282] 0 containers: []
	W1213 11:56:34.618168  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:34.618174  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:34.618230  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:34.645177  604010 cri.go:89] found id: ""
	I1213 11:56:34.645204  604010 logs.go:282] 0 containers: []
	W1213 11:56:34.645213  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:34.645223  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:34.645235  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:34.674203  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:34.674235  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:34.731298  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:34.731332  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:34.747591  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:34.747623  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:34.811066  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:34.802515   10213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:34.803209   10213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:34.804716   10213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:34.805051   10213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:34.806504   10213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:34.811137  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:34.811171  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
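
One detail worth noting in the 11:56:34 cycle just above: "container status" was gathered before "kubelet", while every other cycle ends with it. That reshuffling is consistent with the log sources being held in a Go map, whose iteration order is deliberately randomized; a tiny self-contained illustration (the map contents are paraphrased from this log, not minikube's actual data structure):

    package main

    import "fmt"

    func main() {
    	sources := map[string]string{
    		"kubelet":          "journalctl -u kubelet -n 400",
    		"dmesg":            "dmesg --level warn,err,crit,alert,emerg | tail -n 400",
    		"describe nodes":   "kubectl describe nodes",
    		"containerd":       "journalctl -u containerd -n 400",
    		"container status": "crictl ps -a || docker ps -a",
    	}
    	// Each range over a map may visit keys in a different order.
    	for name := range sources {
    		fmt.Println("Gathering logs for", name, "...")
    	}
    }

The reordering is harmless here: every source is gathered either way, just logged in a different sequence.
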
	I1213 11:56:37.342058  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:37.352580  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:37.352649  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:37.376663  604010 cri.go:89] found id: ""
	I1213 11:56:37.376689  604010 logs.go:282] 0 containers: []
	W1213 11:56:37.376698  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:37.376704  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:37.376763  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:37.400694  604010 cri.go:89] found id: ""
	I1213 11:56:37.400720  604010 logs.go:282] 0 containers: []
	W1213 11:56:37.400728  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:37.400735  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:37.400796  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:37.425687  604010 cri.go:89] found id: ""
	I1213 11:56:37.425715  604010 logs.go:282] 0 containers: []
	W1213 11:56:37.425724  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:37.425730  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:37.425787  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:37.450160  604010 cri.go:89] found id: ""
	I1213 11:56:37.450189  604010 logs.go:282] 0 containers: []
	W1213 11:56:37.450198  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:37.450205  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:37.450266  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:37.475110  604010 cri.go:89] found id: ""
	I1213 11:56:37.475133  604010 logs.go:282] 0 containers: []
	W1213 11:56:37.475142  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:37.475149  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:37.475207  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:37.499102  604010 cri.go:89] found id: ""
	I1213 11:56:37.499171  604010 logs.go:282] 0 containers: []
	W1213 11:56:37.499196  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:37.499207  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:37.499282  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:37.528584  604010 cri.go:89] found id: ""
	I1213 11:56:37.528609  604010 logs.go:282] 0 containers: []
	W1213 11:56:37.528618  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:37.528624  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:37.528708  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:37.554175  604010 cri.go:89] found id: ""
	I1213 11:56:37.554259  604010 logs.go:282] 0 containers: []
	W1213 11:56:37.554283  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:37.554304  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:37.554347  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:37.612670  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:37.612706  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:37.629187  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:37.629218  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:37.694612  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:37.685617   10314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:37.686619   10314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:37.688268   10314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:37.688681   10314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:37.690407   10314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
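	The failure signature above is worth decoding once, since it repeats for the rest of this section: kubectl is being invoked on the node against the default control-plane endpoint, and "connection refused" on localhost:8443 means nothing is listening on that port at all, which is consistent with every kube-apiserver container listing coming back empty. A minimal Go sketch of the same reachability check, assuming it is run directly on the node (the endpoint and timeout here are illustrative, not taken from the log):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// A plain TCP dial is enough to reproduce the kubectl failure above:
	// "connection refused" means no process is bound to the port.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}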
	I1213 11:56:37.694640  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:37.694653  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:37.719952  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:37.719988  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
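	Each retry cycle in this section runs the same probe: one crictl ps -a --quiet --name=<component> query per control-plane component, with an empty ID list reported as "No container was found matching". A condensed sketch of that loop, assuming direct shell access on the node instead of minikube's ssh_runner (the component names are the ones queried in the log; everything else is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// components mirrors the container names probed in the log above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

func main() {
	for _, name := range components {
		// --quiet prints container IDs only, so empty output means
		// the component's container was never created.
		out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %v\n", name, ids)
	}
}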
	I1213 11:56:40.252201  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:40.265281  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:40.265368  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:40.289761  604010 cri.go:89] found id: ""
	I1213 11:56:40.289841  604010 logs.go:282] 0 containers: []
	W1213 11:56:40.289865  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:40.289885  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:40.289969  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:40.314886  604010 cri.go:89] found id: ""
	I1213 11:56:40.314911  604010 logs.go:282] 0 containers: []
	W1213 11:56:40.314920  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:40.314928  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:40.314988  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:40.340433  604010 cri.go:89] found id: ""
	I1213 11:56:40.340460  604010 logs.go:282] 0 containers: []
	W1213 11:56:40.340469  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:40.340475  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:40.340535  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:40.369630  604010 cri.go:89] found id: ""
	I1213 11:56:40.369657  604010 logs.go:282] 0 containers: []
	W1213 11:56:40.369666  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:40.369672  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:40.369730  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:40.396456  604010 cri.go:89] found id: ""
	I1213 11:56:40.396480  604010 logs.go:282] 0 containers: []
	W1213 11:56:40.396489  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:40.396495  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:40.396550  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:40.420915  604010 cri.go:89] found id: ""
	I1213 11:56:40.420982  604010 logs.go:282] 0 containers: []
	W1213 11:56:40.420996  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:40.421004  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:40.421067  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:40.445305  604010 cri.go:89] found id: ""
	I1213 11:56:40.445339  604010 logs.go:282] 0 containers: []
	W1213 11:56:40.445349  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:40.445355  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:40.445423  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:40.470359  604010 cri.go:89] found id: ""
	I1213 11:56:40.470396  604010 logs.go:282] 0 containers: []
	W1213 11:56:40.470406  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:40.470415  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:40.470428  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:40.529991  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:40.530029  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:40.545704  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:40.545785  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:40.614385  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:40.605002   10428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:40.605654   10428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:40.608020   10428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:40.608670   10428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:40.609867   10428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:40.614411  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:40.614423  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:40.640189  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:40.640226  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:43.171206  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:43.187532  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:43.187604  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:43.255773  604010 cri.go:89] found id: ""
	I1213 11:56:43.255816  604010 logs.go:282] 0 containers: []
	W1213 11:56:43.255826  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:43.255833  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:43.255893  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:43.282066  604010 cri.go:89] found id: ""
	I1213 11:56:43.282095  604010 logs.go:282] 0 containers: []
	W1213 11:56:43.282104  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:43.282110  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:43.282169  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:43.307994  604010 cri.go:89] found id: ""
	I1213 11:56:43.308022  604010 logs.go:282] 0 containers: []
	W1213 11:56:43.308031  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:43.308037  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:43.308094  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:43.333649  604010 cri.go:89] found id: ""
	I1213 11:56:43.333682  604010 logs.go:282] 0 containers: []
	W1213 11:56:43.333692  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:43.333699  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:43.333761  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:43.364007  604010 cri.go:89] found id: ""
	I1213 11:56:43.364037  604010 logs.go:282] 0 containers: []
	W1213 11:56:43.364045  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:43.364052  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:43.364110  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:43.389343  604010 cri.go:89] found id: ""
	I1213 11:56:43.389381  604010 logs.go:282] 0 containers: []
	W1213 11:56:43.389389  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:43.389396  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:43.389466  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:43.414572  604010 cri.go:89] found id: ""
	I1213 11:56:43.414608  604010 logs.go:282] 0 containers: []
	W1213 11:56:43.414618  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:43.414624  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:43.414711  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:43.439971  604010 cri.go:89] found id: ""
	I1213 11:56:43.439999  604010 logs.go:282] 0 containers: []
	W1213 11:56:43.440008  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:43.440018  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:43.440034  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:43.455350  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:43.455380  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:43.518971  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:43.510133   10540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:43.510875   10540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:43.512575   10540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:43.513204   10540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:43.514989   10540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:43.519004  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:43.519017  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:43.543826  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:43.543863  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:43.571534  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:43.571561  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:46.127908  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:46.138548  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:46.138627  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:46.177176  604010 cri.go:89] found id: ""
	I1213 11:56:46.177205  604010 logs.go:282] 0 containers: []
	W1213 11:56:46.177214  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:46.177220  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:46.177280  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:46.250872  604010 cri.go:89] found id: ""
	I1213 11:56:46.250897  604010 logs.go:282] 0 containers: []
	W1213 11:56:46.250906  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:46.250913  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:46.250972  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:46.276982  604010 cri.go:89] found id: ""
	I1213 11:56:46.277008  604010 logs.go:282] 0 containers: []
	W1213 11:56:46.277020  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:46.277026  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:46.277086  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:46.308722  604010 cri.go:89] found id: ""
	I1213 11:56:46.308745  604010 logs.go:282] 0 containers: []
	W1213 11:56:46.308754  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:46.308760  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:46.308819  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:46.333457  604010 cri.go:89] found id: ""
	I1213 11:56:46.333479  604010 logs.go:282] 0 containers: []
	W1213 11:56:46.333488  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:46.333495  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:46.333551  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:46.361010  604010 cri.go:89] found id: ""
	I1213 11:56:46.361034  604010 logs.go:282] 0 containers: []
	W1213 11:56:46.361042  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:46.361049  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:46.361107  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:46.385580  604010 cri.go:89] found id: ""
	I1213 11:56:46.385608  604010 logs.go:282] 0 containers: []
	W1213 11:56:46.385625  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:46.385631  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:46.385689  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:46.410013  604010 cri.go:89] found id: ""
	I1213 11:56:46.410041  604010 logs.go:282] 0 containers: []
	W1213 11:56:46.410050  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:46.410059  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:46.410071  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:46.474489  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:46.465232   10652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:46.465851   10652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:46.467612   10652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:46.468248   10652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:46.469990   10652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:46.474512  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:46.474525  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:46.499926  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:46.499961  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:46.529519  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:46.529543  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:46.585780  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:46.585816  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:49.102338  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:49.113041  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:49.113164  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:49.137484  604010 cri.go:89] found id: ""
	I1213 11:56:49.137527  604010 logs.go:282] 0 containers: []
	W1213 11:56:49.137536  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:49.137543  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:49.137633  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:49.176305  604010 cri.go:89] found id: ""
	I1213 11:56:49.176345  604010 logs.go:282] 0 containers: []
	W1213 11:56:49.176354  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:49.176360  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:49.176445  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:49.216965  604010 cri.go:89] found id: ""
	I1213 11:56:49.216992  604010 logs.go:282] 0 containers: []
	W1213 11:56:49.217001  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:49.217007  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:49.217076  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:49.262147  604010 cri.go:89] found id: ""
	I1213 11:56:49.262226  604010 logs.go:282] 0 containers: []
	W1213 11:56:49.262256  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:49.262277  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:49.262367  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:49.292097  604010 cri.go:89] found id: ""
	I1213 11:56:49.292124  604010 logs.go:282] 0 containers: []
	W1213 11:56:49.292133  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:49.292140  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:49.292195  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:49.316193  604010 cri.go:89] found id: ""
	I1213 11:56:49.316219  604010 logs.go:282] 0 containers: []
	W1213 11:56:49.316228  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:49.316235  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:49.316293  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:49.341385  604010 cri.go:89] found id: ""
	I1213 11:56:49.341411  604010 logs.go:282] 0 containers: []
	W1213 11:56:49.341421  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:49.341434  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:49.341503  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:49.365851  604010 cri.go:89] found id: ""
	I1213 11:56:49.365874  604010 logs.go:282] 0 containers: []
	W1213 11:56:49.365883  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:49.365892  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:49.365903  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:49.381508  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:49.381537  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:49.444383  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:49.436163   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:49.436758   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:49.438415   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:49.438958   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:49.440549   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:49.444406  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:49.444419  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:49.469593  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:49.469636  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:49.497881  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:49.497912  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
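	Note the cadence across cycles: each pgrep -xnf kube-apiserver.*minikube.* attempt lands roughly three seconds after the previous one (11:56:40, :43, :46, :49, ...), so the wait appears to be a fixed-interval poll rather than exponential backoff. A minimal sketch of such a wait loop; the interval matches the cadence seen here, but the overall deadline is illustrative, since the log does not state the timeout:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative deadline, not from the log
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("kube-apiserver process is running")
			return
		}
		time.Sleep(3 * time.Second) // matches the ~3s cadence in the log
	}
	fmt.Println("timed out waiting for kube-apiserver")
}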
	I1213 11:56:52.053968  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:52.065301  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:52.065418  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:52.096894  604010 cri.go:89] found id: ""
	I1213 11:56:52.096966  604010 logs.go:282] 0 containers: []
	W1213 11:56:52.096988  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:52.097007  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:52.097097  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:52.124148  604010 cri.go:89] found id: ""
	I1213 11:56:52.124173  604010 logs.go:282] 0 containers: []
	W1213 11:56:52.124186  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:52.124193  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:52.124306  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:52.160416  604010 cri.go:89] found id: ""
	I1213 11:56:52.160439  604010 logs.go:282] 0 containers: []
	W1213 11:56:52.160448  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:52.160455  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:52.160513  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:52.200069  604010 cri.go:89] found id: ""
	I1213 11:56:52.200095  604010 logs.go:282] 0 containers: []
	W1213 11:56:52.200104  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:52.200111  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:52.200174  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:52.263224  604010 cri.go:89] found id: ""
	I1213 11:56:52.263295  604010 logs.go:282] 0 containers: []
	W1213 11:56:52.263310  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:52.263318  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:52.263375  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:52.288649  604010 cri.go:89] found id: ""
	I1213 11:56:52.288675  604010 logs.go:282] 0 containers: []
	W1213 11:56:52.288684  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:52.288691  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:52.288754  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:52.316561  604010 cri.go:89] found id: ""
	I1213 11:56:52.316588  604010 logs.go:282] 0 containers: []
	W1213 11:56:52.316596  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:52.316603  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:52.316660  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:52.341885  604010 cri.go:89] found id: ""
	I1213 11:56:52.341909  604010 logs.go:282] 0 containers: []
	W1213 11:56:52.341918  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:52.341927  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:52.341938  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:52.397001  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:52.397038  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:52.415607  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:52.415635  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:52.493248  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:52.484194   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:52.484676   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:52.486433   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:52.486904   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:52.488650   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:52.493274  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:52.493288  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:52.518551  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:52.518588  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:55.047907  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:55.059302  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:55.059421  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:55.085237  604010 cri.go:89] found id: ""
	I1213 11:56:55.085271  604010 logs.go:282] 0 containers: []
	W1213 11:56:55.085281  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:55.085288  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:55.085362  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:55.112434  604010 cri.go:89] found id: ""
	I1213 11:56:55.112462  604010 logs.go:282] 0 containers: []
	W1213 11:56:55.112475  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:55.112482  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:55.112544  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:55.138067  604010 cri.go:89] found id: ""
	I1213 11:56:55.138101  604010 logs.go:282] 0 containers: []
	W1213 11:56:55.138110  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:55.138117  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:55.138184  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:55.179401  604010 cri.go:89] found id: ""
	I1213 11:56:55.179522  604010 logs.go:282] 0 containers: []
	W1213 11:56:55.179548  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:55.179588  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:55.179766  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:55.234369  604010 cri.go:89] found id: ""
	I1213 11:56:55.234462  604010 logs.go:282] 0 containers: []
	W1213 11:56:55.234499  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:55.234544  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:55.234676  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:55.277189  604010 cri.go:89] found id: ""
	I1213 11:56:55.277271  604010 logs.go:282] 0 containers: []
	W1213 11:56:55.277294  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:55.277314  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:55.277416  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:55.310856  604010 cri.go:89] found id: ""
	I1213 11:56:55.310933  604010 logs.go:282] 0 containers: []
	W1213 11:56:55.310949  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:55.310958  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:55.311020  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:55.337357  604010 cri.go:89] found id: ""
	I1213 11:56:55.337453  604010 logs.go:282] 0 containers: []
	W1213 11:56:55.337468  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:55.337478  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:55.337490  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:55.392569  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:55.392607  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:55.408576  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:55.408608  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:55.471726  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:55.463854   10996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:55.464422   10996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:55.465928   10996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:55.466440   10996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:55.467966   10996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:55.471749  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:55.471762  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:55.497230  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:55.497266  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:58.026521  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:58.040495  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:58.040579  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:58.067542  604010 cri.go:89] found id: ""
	I1213 11:56:58.067567  604010 logs.go:282] 0 containers: []
	W1213 11:56:58.067576  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:58.067583  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:58.067649  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:58.092616  604010 cri.go:89] found id: ""
	I1213 11:56:58.092642  604010 logs.go:282] 0 containers: []
	W1213 11:56:58.092651  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:58.092657  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:58.092714  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:58.117533  604010 cri.go:89] found id: ""
	I1213 11:56:58.117561  604010 logs.go:282] 0 containers: []
	W1213 11:56:58.117572  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:58.117578  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:58.117669  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:58.143441  604010 cri.go:89] found id: ""
	I1213 11:56:58.143465  604010 logs.go:282] 0 containers: []
	W1213 11:56:58.143474  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:58.143481  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:58.143540  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:58.191063  604010 cri.go:89] found id: ""
	I1213 11:56:58.191086  604010 logs.go:282] 0 containers: []
	W1213 11:56:58.191096  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:58.191102  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:58.191175  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:58.233666  604010 cri.go:89] found id: ""
	I1213 11:56:58.233709  604010 logs.go:282] 0 containers: []
	W1213 11:56:58.233727  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:58.233734  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:58.233805  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:58.285997  604010 cri.go:89] found id: ""
	I1213 11:56:58.286020  604010 logs.go:282] 0 containers: []
	W1213 11:56:58.286029  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:58.286035  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:58.286099  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:58.313519  604010 cri.go:89] found id: ""
	I1213 11:56:58.313544  604010 logs.go:282] 0 containers: []
	W1213 11:56:58.313553  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:58.313570  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:58.313581  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:58.372174  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:58.372208  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:58.387775  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:58.387803  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:58.457676  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:58.448571   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:58.449279   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:58.451118   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:58.451644   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:58.453241   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:58.457698  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:58.457711  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:58.482922  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:58.482956  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:01.016291  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:01.027467  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:01.027540  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:01.061002  604010 cri.go:89] found id: ""
	I1213 11:57:01.061026  604010 logs.go:282] 0 containers: []
	W1213 11:57:01.061035  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:01.061041  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:01.061099  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:01.090375  604010 cri.go:89] found id: ""
	I1213 11:57:01.090403  604010 logs.go:282] 0 containers: []
	W1213 11:57:01.090412  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:01.090418  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:01.090476  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:01.118417  604010 cri.go:89] found id: ""
	I1213 11:57:01.118441  604010 logs.go:282] 0 containers: []
	W1213 11:57:01.118450  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:01.118456  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:01.118521  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:01.147901  604010 cri.go:89] found id: ""
	I1213 11:57:01.147929  604010 logs.go:282] 0 containers: []
	W1213 11:57:01.147938  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:01.147946  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:01.148009  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:01.207604  604010 cri.go:89] found id: ""
	I1213 11:57:01.207681  604010 logs.go:282] 0 containers: []
	W1213 11:57:01.207708  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:01.207727  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:01.207818  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:01.263340  604010 cri.go:89] found id: ""
	I1213 11:57:01.263407  604010 logs.go:282] 0 containers: []
	W1213 11:57:01.263428  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:01.263446  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:01.263531  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:01.296139  604010 cri.go:89] found id: ""
	I1213 11:57:01.296213  604010 logs.go:282] 0 containers: []
	W1213 11:57:01.296231  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:01.296242  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:01.296313  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:01.323150  604010 cri.go:89] found id: ""
	I1213 11:57:01.323175  604010 logs.go:282] 0 containers: []
	W1213 11:57:01.323185  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:01.323194  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:01.323206  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:01.351631  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:01.351659  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:01.410361  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:01.410398  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:01.426884  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:01.426921  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:01.495923  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:01.487940   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:01.488738   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:01.490397   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:01.490777   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:01.492041   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:01.487940   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:01.488738   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:01.490397   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:01.490777   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:01.492041   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:01.495947  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:01.495960  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
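The lines above are one complete iteration of the retry loop: pgrep for a running kube-apiserver process, then a crictl ps -a --quiet --name=<component> query for each expected control-plane container, then a fresh pass over the kubelet, dmesg, describe-nodes, containerd, and container-status logs. A hedged Go sketch of the enumeration step, assuming only that crictl is installed and runnable via sudo:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // The same component names the loop queries, in the same order.
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "kubernetes-dashboard",
        }
        for _, name := range components {
            // Same command the log runs: sudo crictl ps -a --quiet --name=<name>
            out, err := exec.Command("sudo", "crictl", "ps", "-a",
                "--quiet", "--name="+name).Output()
            ids := strings.TrimSpace(string(out))
            if err != nil || ids == "" {
                fmt.Printf("no container found matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %s\n", name, ids)
        }
    }

An empty result for every name, as seen throughout this run, means the control-plane containers were never created at all, so the failure sits below Kubernetes (kubelet or containerd), not inside it.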
	I1213 11:57:04.023306  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:04.034376  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:04.034451  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:04.058883  604010 cri.go:89] found id: ""
	I1213 11:57:04.058911  604010 logs.go:282] 0 containers: []
	W1213 11:57:04.058921  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:04.058929  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:04.058990  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:04.084571  604010 cri.go:89] found id: ""
	I1213 11:57:04.084598  604010 logs.go:282] 0 containers: []
	W1213 11:57:04.084607  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:04.084615  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:04.084698  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:04.111492  604010 cri.go:89] found id: ""
	I1213 11:57:04.111518  604010 logs.go:282] 0 containers: []
	W1213 11:57:04.111527  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:04.111534  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:04.111594  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:04.140605  604010 cri.go:89] found id: ""
	I1213 11:57:04.140632  604010 logs.go:282] 0 containers: []
	W1213 11:57:04.140641  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:04.140648  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:04.140709  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:04.170556  604010 cri.go:89] found id: ""
	I1213 11:57:04.170583  604010 logs.go:282] 0 containers: []
	W1213 11:57:04.170592  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:04.170598  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:04.170654  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:04.221024  604010 cri.go:89] found id: ""
	I1213 11:57:04.221047  604010 logs.go:282] 0 containers: []
	W1213 11:57:04.221056  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:04.221062  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:04.221120  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:04.258557  604010 cri.go:89] found id: ""
	I1213 11:57:04.258583  604010 logs.go:282] 0 containers: []
	W1213 11:57:04.258601  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:04.258608  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:04.258667  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:04.286096  604010 cri.go:89] found id: ""
	I1213 11:57:04.286121  604010 logs.go:282] 0 containers: []
	W1213 11:57:04.286130  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:04.286140  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:04.286154  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:04.342856  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:04.342892  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:04.359212  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:04.359247  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:04.426841  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:04.417916   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:04.418505   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:04.420627   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:04.421110   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:04.422742   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:04.417916   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:04.418505   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:04.420627   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:04.421110   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:04.422742   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:04.426863  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:04.426876  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:04.452958  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:04.452999  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:06.985291  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:06.996435  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:06.996506  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:07.027757  604010 cri.go:89] found id: ""
	I1213 11:57:07.027792  604010 logs.go:282] 0 containers: []
	W1213 11:57:07.027802  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:07.027808  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:07.027875  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:07.053033  604010 cri.go:89] found id: ""
	I1213 11:57:07.053059  604010 logs.go:282] 0 containers: []
	W1213 11:57:07.053068  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:07.053075  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:07.053135  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:07.077293  604010 cri.go:89] found id: ""
	I1213 11:57:07.077320  604010 logs.go:282] 0 containers: []
	W1213 11:57:07.077330  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:07.077336  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:07.077400  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:07.101590  604010 cri.go:89] found id: ""
	I1213 11:57:07.101615  604010 logs.go:282] 0 containers: []
	W1213 11:57:07.101630  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:07.101636  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:07.101693  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:07.129837  604010 cri.go:89] found id: ""
	I1213 11:57:07.129867  604010 logs.go:282] 0 containers: []
	W1213 11:57:07.129877  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:07.129883  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:07.129943  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:07.155693  604010 cri.go:89] found id: ""
	I1213 11:57:07.155719  604010 logs.go:282] 0 containers: []
	W1213 11:57:07.155729  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:07.155735  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:07.155799  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:07.208290  604010 cri.go:89] found id: ""
	I1213 11:57:07.208318  604010 logs.go:282] 0 containers: []
	W1213 11:57:07.208327  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:07.208334  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:07.208398  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:07.260450  604010 cri.go:89] found id: ""
	I1213 11:57:07.260475  604010 logs.go:282] 0 containers: []
	W1213 11:57:07.260485  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:07.260494  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:07.260505  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:07.317882  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:07.317918  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:07.334495  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:07.334524  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:07.403490  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:07.393965   11443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:07.394975   11443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:07.396603   11443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:07.397190   11443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:07.398983   11443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:07.393965   11443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:07.394975   11443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:07.396603   11443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:07.397190   11443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:07.398983   11443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:07.403516  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:07.403531  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:07.428864  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:07.428901  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:09.962852  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:09.973890  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:09.973963  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:10.008764  604010 cri.go:89] found id: ""
	I1213 11:57:10.008791  604010 logs.go:282] 0 containers: []
	W1213 11:57:10.008801  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:10.008808  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:10.008881  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:10.042627  604010 cri.go:89] found id: ""
	I1213 11:57:10.042655  604010 logs.go:282] 0 containers: []
	W1213 11:57:10.042667  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:10.042674  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:10.042762  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:10.070196  604010 cri.go:89] found id: ""
	I1213 11:57:10.070222  604010 logs.go:282] 0 containers: []
	W1213 11:57:10.070231  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:10.070238  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:10.070304  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:10.097458  604010 cri.go:89] found id: ""
	I1213 11:57:10.097484  604010 logs.go:282] 0 containers: []
	W1213 11:57:10.097493  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:10.097500  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:10.097559  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:10.124061  604010 cri.go:89] found id: ""
	I1213 11:57:10.124087  604010 logs.go:282] 0 containers: []
	W1213 11:57:10.124095  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:10.124101  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:10.124158  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:10.153659  604010 cri.go:89] found id: ""
	I1213 11:57:10.153696  604010 logs.go:282] 0 containers: []
	W1213 11:57:10.153705  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:10.153713  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:10.153792  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:10.226910  604010 cri.go:89] found id: ""
	I1213 11:57:10.226938  604010 logs.go:282] 0 containers: []
	W1213 11:57:10.226947  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:10.226953  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:10.227010  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:10.265652  604010 cri.go:89] found id: ""
	I1213 11:57:10.265676  604010 logs.go:282] 0 containers: []
	W1213 11:57:10.265685  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:10.265695  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:10.265707  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:10.332797  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:10.323569   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:10.325115   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:10.325998   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:10.326908   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:10.328530   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:10.323569   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:10.325115   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:10.325998   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:10.326908   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:10.328530   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:10.332820  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:10.332832  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:10.357553  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:10.357592  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:10.391809  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:10.391838  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:10.447255  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:10.447293  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:12.963670  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:12.974670  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:12.974767  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:13.006230  604010 cri.go:89] found id: ""
	I1213 11:57:13.006259  604010 logs.go:282] 0 containers: []
	W1213 11:57:13.006268  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:13.006275  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:13.006340  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:13.031301  604010 cri.go:89] found id: ""
	I1213 11:57:13.031325  604010 logs.go:282] 0 containers: []
	W1213 11:57:13.031334  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:13.031340  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:13.031396  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:13.055897  604010 cri.go:89] found id: ""
	I1213 11:57:13.055927  604010 logs.go:282] 0 containers: []
	W1213 11:57:13.055936  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:13.055942  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:13.056003  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:13.081708  604010 cri.go:89] found id: ""
	I1213 11:57:13.081733  604010 logs.go:282] 0 containers: []
	W1213 11:57:13.081748  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:13.081755  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:13.081812  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:13.111812  604010 cri.go:89] found id: ""
	I1213 11:57:13.111885  604010 logs.go:282] 0 containers: []
	W1213 11:57:13.111900  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:13.111909  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:13.111971  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:13.136957  604010 cri.go:89] found id: ""
	I1213 11:57:13.136992  604010 logs.go:282] 0 containers: []
	W1213 11:57:13.137001  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:13.137025  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:13.137099  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:13.180320  604010 cri.go:89] found id: ""
	I1213 11:57:13.180354  604010 logs.go:282] 0 containers: []
	W1213 11:57:13.180363  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:13.180370  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:13.180438  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:13.232992  604010 cri.go:89] found id: ""
	I1213 11:57:13.233027  604010 logs.go:282] 0 containers: []
	W1213 11:57:13.233037  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:13.233047  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:13.233060  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:13.306234  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:13.297958   11664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:13.298476   11664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:13.299586   11664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:13.299955   11664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:13.301394   11664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:13.297958   11664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:13.298476   11664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:13.299586   11664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:13.299955   11664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:13.301394   11664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:13.306257  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:13.306272  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:13.331798  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:13.331837  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:13.364219  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:13.364248  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:13.419158  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:13.419191  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:15.935716  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:15.946701  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:15.946796  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:15.972298  604010 cri.go:89] found id: ""
	I1213 11:57:15.972375  604010 logs.go:282] 0 containers: []
	W1213 11:57:15.972392  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:15.972399  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:15.972468  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:15.997435  604010 cri.go:89] found id: ""
	I1213 11:57:15.997458  604010 logs.go:282] 0 containers: []
	W1213 11:57:15.997467  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:15.997474  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:15.997540  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:16.026069  604010 cri.go:89] found id: ""
	I1213 11:57:16.026107  604010 logs.go:282] 0 containers: []
	W1213 11:57:16.026116  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:16.026123  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:16.026190  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:16.051047  604010 cri.go:89] found id: ""
	I1213 11:57:16.051125  604010 logs.go:282] 0 containers: []
	W1213 11:57:16.051141  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:16.051149  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:16.051209  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:16.076992  604010 cri.go:89] found id: ""
	I1213 11:57:16.077060  604010 logs.go:282] 0 containers: []
	W1213 11:57:16.077086  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:16.077104  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:16.077190  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:16.104719  604010 cri.go:89] found id: ""
	I1213 11:57:16.104788  604010 logs.go:282] 0 containers: []
	W1213 11:57:16.104811  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:16.104830  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:16.104918  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:16.136668  604010 cri.go:89] found id: ""
	I1213 11:57:16.136696  604010 logs.go:282] 0 containers: []
	W1213 11:57:16.136705  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:16.136712  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:16.136772  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:16.184065  604010 cri.go:89] found id: ""
	I1213 11:57:16.184100  604010 logs.go:282] 0 containers: []
	W1213 11:57:16.184111  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:16.184120  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:16.184153  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:16.270928  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:16.270968  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:16.287140  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:16.287175  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:16.357398  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:16.349038   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:16.349516   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:16.351357   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:16.351864   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:16.353557   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:16.349038   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:16.349516   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:16.351357   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:16.351864   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:16.353557   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:16.357423  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:16.357435  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:16.381740  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:16.381774  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:18.910619  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:18.921087  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:18.921166  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:18.946478  604010 cri.go:89] found id: ""
	I1213 11:57:18.946503  604010 logs.go:282] 0 containers: []
	W1213 11:57:18.946512  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:18.946519  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:18.946578  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:18.971279  604010 cri.go:89] found id: ""
	I1213 11:57:18.971304  604010 logs.go:282] 0 containers: []
	W1213 11:57:18.971313  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:18.971320  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:18.971378  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:18.996033  604010 cri.go:89] found id: ""
	I1213 11:57:18.996059  604010 logs.go:282] 0 containers: []
	W1213 11:57:18.996068  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:18.996074  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:18.996158  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:19.021977  604010 cri.go:89] found id: ""
	I1213 11:57:19.022006  604010 logs.go:282] 0 containers: []
	W1213 11:57:19.022015  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:19.022024  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:19.022086  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:19.046193  604010 cri.go:89] found id: ""
	I1213 11:57:19.046221  604010 logs.go:282] 0 containers: []
	W1213 11:57:19.046230  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:19.046236  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:19.046297  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:19.070868  604010 cri.go:89] found id: ""
	I1213 11:57:19.070895  604010 logs.go:282] 0 containers: []
	W1213 11:57:19.070904  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:19.070911  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:19.071001  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:19.096253  604010 cri.go:89] found id: ""
	I1213 11:57:19.096276  604010 logs.go:282] 0 containers: []
	W1213 11:57:19.096285  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:19.096292  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:19.096373  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:19.121131  604010 cri.go:89] found id: ""
	I1213 11:57:19.121167  604010 logs.go:282] 0 containers: []
	W1213 11:57:19.121177  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:19.121186  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:19.121216  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:19.208507  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:19.190547   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:19.191444   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:19.193889   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:19.194572   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:19.199234   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:19.190547   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:19.191444   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:19.193889   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:19.194572   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:19.199234   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:19.208539  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:19.208553  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:19.237572  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:19.237656  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:19.276423  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:19.276448  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:19.334610  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:19.334648  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:21.851744  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:21.861936  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:21.861999  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:21.885880  604010 cri.go:89] found id: ""
	I1213 11:57:21.885901  604010 logs.go:282] 0 containers: []
	W1213 11:57:21.885909  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:21.885916  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:21.885971  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:21.909866  604010 cri.go:89] found id: ""
	I1213 11:57:21.909889  604010 logs.go:282] 0 containers: []
	W1213 11:57:21.909898  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:21.909904  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:21.909961  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:21.934547  604010 cri.go:89] found id: ""
	I1213 11:57:21.934576  604010 logs.go:282] 0 containers: []
	W1213 11:57:21.934585  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:21.934591  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:21.934651  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:21.959889  604010 cri.go:89] found id: ""
	I1213 11:57:21.959915  604010 logs.go:282] 0 containers: []
	W1213 11:57:21.959925  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:21.959932  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:21.959988  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:21.989023  604010 cri.go:89] found id: ""
	I1213 11:57:21.989099  604010 logs.go:282] 0 containers: []
	W1213 11:57:21.989134  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:21.989159  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:21.989243  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:22.019806  604010 cri.go:89] found id: ""
	I1213 11:57:22.019848  604010 logs.go:282] 0 containers: []
	W1213 11:57:22.019861  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:22.019868  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:22.019934  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:22.044814  604010 cri.go:89] found id: ""
	I1213 11:57:22.044841  604010 logs.go:282] 0 containers: []
	W1213 11:57:22.044852  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:22.044858  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:22.044923  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:22.074682  604010 cri.go:89] found id: ""
	I1213 11:57:22.074726  604010 logs.go:282] 0 containers: []
	W1213 11:57:22.074735  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:22.074745  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:22.074757  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:22.150025  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:22.141291   11998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:22.141746   11998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:22.143484   11998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:22.144157   11998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:22.146009   11998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:22.141291   11998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:22.141746   11998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:22.143484   11998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:22.144157   11998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:22.146009   11998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
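	The `describe nodes` probe fails the same way in every cycle: kubectl, pointed at /var/lib/minikube/kubeconfig, dials https://localhost:8443 and gets connection refused on [::1]:8443, meaning nothing is listening on the apiserver port at all. The probe can be reproduced as logged; the lighter curl check against /healthz is an assumption for manual triage, not something the test runs:

	    # the exact probe from the log:
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig
	    # assumed lighter check: is anything listening on 8443?
	    curl -ks https://localhost:8443/healthz || echo "apiserver not listening"

	Connection refused, rather than a TLS or authorization error, is consistent with the empty crictl listings above: no kube-apiserver container ever bound the port.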
	I1213 11:57:22.150049  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:22.150062  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:22.178881  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:22.178917  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:22.216709  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:22.216740  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:22.281457  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:22.281489  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:24.798312  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:24.808695  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:24.808764  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:24.835809  604010 cri.go:89] found id: ""
	I1213 11:57:24.835839  604010 logs.go:282] 0 containers: []
	W1213 11:57:24.835848  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:24.835855  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:24.835913  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:24.864535  604010 cri.go:89] found id: ""
	I1213 11:57:24.864560  604010 logs.go:282] 0 containers: []
	W1213 11:57:24.864568  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:24.864574  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:24.864630  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:24.894267  604010 cri.go:89] found id: ""
	I1213 11:57:24.894290  604010 logs.go:282] 0 containers: []
	W1213 11:57:24.894299  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:24.894305  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:24.894364  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:24.923204  604010 cri.go:89] found id: ""
	I1213 11:57:24.923237  604010 logs.go:282] 0 containers: []
	W1213 11:57:24.923248  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:24.923254  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:24.923313  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:24.957663  604010 cri.go:89] found id: ""
	I1213 11:57:24.957689  604010 logs.go:282] 0 containers: []
	W1213 11:57:24.957698  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:24.957705  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:24.957786  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:24.982499  604010 cri.go:89] found id: ""
	I1213 11:57:24.982524  604010 logs.go:282] 0 containers: []
	W1213 11:57:24.982533  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:24.982539  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:24.982596  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:25.013305  604010 cri.go:89] found id: ""
	I1213 11:57:25.013332  604010 logs.go:282] 0 containers: []
	W1213 11:57:25.013342  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:25.013348  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:25.013426  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:25.042403  604010 cri.go:89] found id: ""
	I1213 11:57:25.042429  604010 logs.go:282] 0 containers: []
	W1213 11:57:25.042440  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:25.042450  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:25.042462  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:25.110074  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:25.100728   12114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:25.101372   12114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:25.103156   12114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:25.103840   12114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:25.106138   12114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:25.100728   12114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:25.101372   12114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:25.103156   12114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:25.103840   12114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:25.106138   12114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:25.110097  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:25.110109  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:25.136135  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:25.136175  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:25.187750  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:25.187781  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:25.269417  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:25.269496  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
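	Between polls, minikube gathers the four raw log sources shown above. For manual triage the same set can be pulled in one shot; the commands are copied from the log, only the grouping and comments are added:

	    sudo journalctl -u kubelet -n 400        # kubelet unit log
	    sudo journalctl -u containerd -n 400     # container runtime log
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings and errors
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a            # container status

	With the apiserver down, the kubelet unit log is usually the informative one, since it records why the kube-apiserver static pod is not being created or keeps failing.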
	I1213 11:57:27.795410  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:27.806308  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:27.806393  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:27.833178  604010 cri.go:89] found id: ""
	I1213 11:57:27.833204  604010 logs.go:282] 0 containers: []
	W1213 11:57:27.833213  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:27.833220  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:27.833280  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:27.864759  604010 cri.go:89] found id: ""
	I1213 11:57:27.864790  604010 logs.go:282] 0 containers: []
	W1213 11:57:27.864800  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:27.864807  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:27.864870  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:27.894576  604010 cri.go:89] found id: ""
	I1213 11:57:27.894643  604010 logs.go:282] 0 containers: []
	W1213 11:57:27.894668  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:27.894722  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:27.894809  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:27.919695  604010 cri.go:89] found id: ""
	I1213 11:57:27.919720  604010 logs.go:282] 0 containers: []
	W1213 11:57:27.919728  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:27.919735  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:27.919809  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:27.944128  604010 cri.go:89] found id: ""
	I1213 11:57:27.944152  604010 logs.go:282] 0 containers: []
	W1213 11:57:27.944161  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:27.944168  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:27.944247  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:27.968369  604010 cri.go:89] found id: ""
	I1213 11:57:27.968393  604010 logs.go:282] 0 containers: []
	W1213 11:57:27.968402  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:27.968409  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:27.968507  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:27.997345  604010 cri.go:89] found id: ""
	I1213 11:57:27.997372  604010 logs.go:282] 0 containers: []
	W1213 11:57:27.997381  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:27.997388  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:27.997451  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:28.029787  604010 cri.go:89] found id: ""
	I1213 11:57:28.029815  604010 logs.go:282] 0 containers: []
	W1213 11:57:28.029825  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:28.029837  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:28.029851  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:28.059897  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:28.059930  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:28.116398  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:28.116433  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:28.133239  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:28.133269  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:28.257725  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:28.249038   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:28.249625   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:28.251202   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:28.251730   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:28.253377   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:28.249038   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:28.249625   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:28.251202   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:28.251730   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:28.253377   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:28.257746  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:28.257758  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:30.784544  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:30.795049  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:30.795122  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:30.819394  604010 cri.go:89] found id: ""
	I1213 11:57:30.819419  604010 logs.go:282] 0 containers: []
	W1213 11:57:30.819427  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:30.819434  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:30.819491  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:30.843159  604010 cri.go:89] found id: ""
	I1213 11:57:30.843184  604010 logs.go:282] 0 containers: []
	W1213 11:57:30.843193  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:30.843199  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:30.843254  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:30.869845  604010 cri.go:89] found id: ""
	I1213 11:57:30.869867  604010 logs.go:282] 0 containers: []
	W1213 11:57:30.869876  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:30.869885  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:30.869941  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:30.896812  604010 cri.go:89] found id: ""
	I1213 11:57:30.896836  604010 logs.go:282] 0 containers: []
	W1213 11:57:30.896845  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:30.896853  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:30.896913  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:30.921770  604010 cri.go:89] found id: ""
	I1213 11:57:30.921794  604010 logs.go:282] 0 containers: []
	W1213 11:57:30.921804  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:30.921810  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:30.921867  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:30.948842  604010 cri.go:89] found id: ""
	I1213 11:57:30.948869  604010 logs.go:282] 0 containers: []
	W1213 11:57:30.948878  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:30.948885  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:30.948941  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:30.975761  604010 cri.go:89] found id: ""
	I1213 11:57:30.975785  604010 logs.go:282] 0 containers: []
	W1213 11:57:30.975794  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:30.975800  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:30.975861  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:31.009297  604010 cri.go:89] found id: ""
	I1213 11:57:31.009324  604010 logs.go:282] 0 containers: []
	W1213 11:57:31.009333  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:31.009344  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:31.009357  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:31.026148  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:31.026228  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:31.092501  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:31.083099   12350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:31.083809   12350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:31.085589   12350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:31.086335   12350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:31.087969   12350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:31.083099   12350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:31.083809   12350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:31.085589   12350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:31.086335   12350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:31.087969   12350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:31.092527  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:31.092540  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:31.119062  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:31.119100  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:31.148109  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:31.148140  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
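	Each iteration of the loop opens with a process-level liveness probe before any crictl query, visible as the `pgrep` line below. A sketch of the same probe, with the pattern copied from the log:

	    # -f matches against the full command line, -x requires the pattern to
	    # match it exactly, -n picks the newest match; exit status 0 means a
	    # kube-apiserver process for this minikube node exists
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo "apiserver process found"

	Here the probe keeps coming back empty, so the loop falls through to the container listing and log gathering on every pass.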
	I1213 11:57:33.733415  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:33.744879  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:33.744947  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:33.769975  604010 cri.go:89] found id: ""
	I1213 11:57:33.770002  604010 logs.go:282] 0 containers: []
	W1213 11:57:33.770012  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:33.770019  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:33.770118  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:33.795564  604010 cri.go:89] found id: ""
	I1213 11:57:33.795587  604010 logs.go:282] 0 containers: []
	W1213 11:57:33.795595  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:33.795602  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:33.795658  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:33.820165  604010 cri.go:89] found id: ""
	I1213 11:57:33.820189  604010 logs.go:282] 0 containers: []
	W1213 11:57:33.820197  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:33.820205  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:33.820266  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:33.850474  604010 cri.go:89] found id: ""
	I1213 11:57:33.850496  604010 logs.go:282] 0 containers: []
	W1213 11:57:33.850504  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:33.850511  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:33.850571  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:33.875577  604010 cri.go:89] found id: ""
	I1213 11:57:33.875599  604010 logs.go:282] 0 containers: []
	W1213 11:57:33.875613  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:33.875620  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:33.875676  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:33.899672  604010 cri.go:89] found id: ""
	I1213 11:57:33.899696  604010 logs.go:282] 0 containers: []
	W1213 11:57:33.899704  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:33.899711  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:33.899771  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:33.924330  604010 cri.go:89] found id: ""
	I1213 11:57:33.924353  604010 logs.go:282] 0 containers: []
	W1213 11:57:33.924363  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:33.924369  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:33.924426  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:33.948447  604010 cri.go:89] found id: ""
	I1213 11:57:33.948470  604010 logs.go:282] 0 containers: []
	W1213 11:57:33.948479  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:33.948489  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:33.948500  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:34.007962  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:34.008002  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:34.025302  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:34.025333  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:34.092523  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:34.083642   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:34.084406   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:34.086056   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:34.086792   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:34.088528   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:34.083642   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:34.084406   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:34.086056   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:34.086792   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:34.088528   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:34.092559  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:34.092571  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:34.118672  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:34.118743  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:36.651173  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:36.662055  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:36.662135  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:36.690956  604010 cri.go:89] found id: ""
	I1213 11:57:36.690981  604010 logs.go:282] 0 containers: []
	W1213 11:57:36.690990  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:36.690997  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:36.691067  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:36.716966  604010 cri.go:89] found id: ""
	I1213 11:57:36.716989  604010 logs.go:282] 0 containers: []
	W1213 11:57:36.716998  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:36.717004  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:36.717063  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:36.741609  604010 cri.go:89] found id: ""
	I1213 11:57:36.741651  604010 logs.go:282] 0 containers: []
	W1213 11:57:36.741661  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:36.741667  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:36.741736  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:36.766862  604010 cri.go:89] found id: ""
	I1213 11:57:36.766898  604010 logs.go:282] 0 containers: []
	W1213 11:57:36.766907  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:36.766914  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:36.766978  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:36.792075  604010 cri.go:89] found id: ""
	I1213 11:57:36.792103  604010 logs.go:282] 0 containers: []
	W1213 11:57:36.792112  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:36.792119  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:36.792198  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:36.817506  604010 cri.go:89] found id: ""
	I1213 11:57:36.817540  604010 logs.go:282] 0 containers: []
	W1213 11:57:36.817549  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:36.817558  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:36.817624  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:36.842603  604010 cri.go:89] found id: ""
	I1213 11:57:36.842627  604010 logs.go:282] 0 containers: []
	W1213 11:57:36.842635  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:36.842641  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:36.842721  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:36.868253  604010 cri.go:89] found id: ""
	I1213 11:57:36.868276  604010 logs.go:282] 0 containers: []
	W1213 11:57:36.868286  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:36.868295  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:36.868307  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:36.925033  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:36.925067  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:36.941121  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:36.941202  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:37.010945  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:36.998940   12577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:37.000295   12577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:37.000838   12577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:37.002747   12577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:37.003200   12577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:36.998940   12577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:37.000295   12577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:37.000838   12577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:37.002747   12577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:37.003200   12577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:37.010971  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:37.010986  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:37.039679  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:37.039717  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:39.569521  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:39.580209  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:39.580283  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:39.607577  604010 cri.go:89] found id: ""
	I1213 11:57:39.607609  604010 logs.go:282] 0 containers: []
	W1213 11:57:39.607618  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:39.607625  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:39.607684  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:39.632984  604010 cri.go:89] found id: ""
	I1213 11:57:39.633007  604010 logs.go:282] 0 containers: []
	W1213 11:57:39.633016  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:39.633022  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:39.633079  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:39.660977  604010 cri.go:89] found id: ""
	I1213 11:57:39.661006  604010 logs.go:282] 0 containers: []
	W1213 11:57:39.661016  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:39.661022  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:39.661083  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:39.685387  604010 cri.go:89] found id: ""
	I1213 11:57:39.685414  604010 logs.go:282] 0 containers: []
	W1213 11:57:39.685423  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:39.685430  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:39.685488  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:39.711315  604010 cri.go:89] found id: ""
	I1213 11:57:39.711354  604010 logs.go:282] 0 containers: []
	W1213 11:57:39.711364  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:39.711370  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:39.711434  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:39.736665  604010 cri.go:89] found id: ""
	I1213 11:57:39.736691  604010 logs.go:282] 0 containers: []
	W1213 11:57:39.736700  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:39.736707  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:39.736765  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:39.761215  604010 cri.go:89] found id: ""
	I1213 11:57:39.761240  604010 logs.go:282] 0 containers: []
	W1213 11:57:39.761250  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:39.761257  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:39.761317  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:39.785612  604010 cri.go:89] found id: ""
	I1213 11:57:39.785635  604010 logs.go:282] 0 containers: []
	W1213 11:57:39.785667  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:39.785677  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:39.785688  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:39.818169  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:39.818198  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:39.876172  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:39.876207  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:39.893614  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:39.893697  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:39.961561  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:39.953062   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:39.953798   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:39.955462   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:39.955793   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:39.957262   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:39.953062   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:39.953798   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:39.955462   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:39.955793   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:39.957262   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:39.961582  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:39.961598  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:42.487536  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:42.498423  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:42.498495  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:42.526754  604010 cri.go:89] found id: ""
	I1213 11:57:42.526784  604010 logs.go:282] 0 containers: []
	W1213 11:57:42.526793  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:42.526800  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:42.526866  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:42.557909  604010 cri.go:89] found id: ""
	I1213 11:57:42.557938  604010 logs.go:282] 0 containers: []
	W1213 11:57:42.557948  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:42.557955  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:42.558012  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:42.583283  604010 cri.go:89] found id: ""
	I1213 11:57:42.583311  604010 logs.go:282] 0 containers: []
	W1213 11:57:42.583319  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:42.583325  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:42.583417  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:42.612201  604010 cri.go:89] found id: ""
	I1213 11:57:42.612228  604010 logs.go:282] 0 containers: []
	W1213 11:57:42.612238  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:42.612244  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:42.612304  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:42.636897  604010 cri.go:89] found id: ""
	I1213 11:57:42.636926  604010 logs.go:282] 0 containers: []
	W1213 11:57:42.636935  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:42.636942  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:42.637003  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:42.662077  604010 cri.go:89] found id: ""
	I1213 11:57:42.662101  604010 logs.go:282] 0 containers: []
	W1213 11:57:42.662109  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:42.662116  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:42.662181  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:42.689090  604010 cri.go:89] found id: ""
	I1213 11:57:42.689117  604010 logs.go:282] 0 containers: []
	W1213 11:57:42.689126  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:42.689132  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:42.689194  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:42.714186  604010 cri.go:89] found id: ""
	I1213 11:57:42.714220  604010 logs.go:282] 0 containers: []
	W1213 11:57:42.714229  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:42.714239  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:42.714253  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:42.730012  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:42.730043  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:42.793528  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:42.784227   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:42.785106   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:42.787066   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:42.787860   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:42.789513   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:42.784227   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:42.785106   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:42.787066   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:42.787860   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:42.789513   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:42.793550  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:42.793562  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:42.820504  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:42.820540  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:42.850739  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:42.850772  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:45.416253  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:45.428104  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:45.428174  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:45.486919  604010 cri.go:89] found id: ""
	I1213 11:57:45.486943  604010 logs.go:282] 0 containers: []
	W1213 11:57:45.486952  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:45.486959  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:45.487018  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:45.518438  604010 cri.go:89] found id: ""
	I1213 11:57:45.518466  604010 logs.go:282] 0 containers: []
	W1213 11:57:45.518475  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:45.518482  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:45.518539  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:45.543147  604010 cri.go:89] found id: ""
	I1213 11:57:45.543174  604010 logs.go:282] 0 containers: []
	W1213 11:57:45.543183  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:45.543189  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:45.543247  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:45.568184  604010 cri.go:89] found id: ""
	I1213 11:57:45.568210  604010 logs.go:282] 0 containers: []
	W1213 11:57:45.568219  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:45.568226  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:45.568283  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:45.597036  604010 cri.go:89] found id: ""
	I1213 11:57:45.597062  604010 logs.go:282] 0 containers: []
	W1213 11:57:45.597072  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:45.597078  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:45.597140  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:45.625538  604010 cri.go:89] found id: ""
	I1213 11:57:45.625563  604010 logs.go:282] 0 containers: []
	W1213 11:57:45.625572  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:45.625579  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:45.625664  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:45.650305  604010 cri.go:89] found id: ""
	I1213 11:57:45.650340  604010 logs.go:282] 0 containers: []
	W1213 11:57:45.650350  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:45.650356  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:45.650415  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:45.674642  604010 cri.go:89] found id: ""
	I1213 11:57:45.674668  604010 logs.go:282] 0 containers: []
	W1213 11:57:45.674677  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:45.674723  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:45.674736  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:45.737984  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:45.729194   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:45.729808   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:45.731387   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:45.731876   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:45.733423   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:45.729194   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:45.729808   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:45.731387   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:45.731876   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:45.733423   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:45.738014  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:45.738030  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:45.764253  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:45.764293  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:45.794872  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:45.794900  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:45.852148  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:45.852181  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:48.369680  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:48.381452  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:48.381527  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:48.406963  604010 cri.go:89] found id: ""
	I1213 11:57:48.406989  604010 logs.go:282] 0 containers: []
	W1213 11:57:48.406998  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:48.407004  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:48.407069  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:48.453016  604010 cri.go:89] found id: ""
	I1213 11:57:48.453043  604010 logs.go:282] 0 containers: []
	W1213 11:57:48.453052  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:48.453060  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:48.453120  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:48.512775  604010 cri.go:89] found id: ""
	I1213 11:57:48.512806  604010 logs.go:282] 0 containers: []
	W1213 11:57:48.512815  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:48.512821  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:48.512879  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:48.538032  604010 cri.go:89] found id: ""
	I1213 11:57:48.538055  604010 logs.go:282] 0 containers: []
	W1213 11:57:48.538064  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:48.538070  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:48.538129  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:48.562781  604010 cri.go:89] found id: ""
	I1213 11:57:48.562815  604010 logs.go:282] 0 containers: []
	W1213 11:57:48.562831  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:48.562841  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:48.562899  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:48.592224  604010 cri.go:89] found id: ""
	I1213 11:57:48.592249  604010 logs.go:282] 0 containers: []
	W1213 11:57:48.592258  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:48.592265  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:48.592324  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:48.616499  604010 cri.go:89] found id: ""
	I1213 11:57:48.616524  604010 logs.go:282] 0 containers: []
	W1213 11:57:48.616533  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:48.616540  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:48.616604  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:48.641140  604010 cri.go:89] found id: ""
	I1213 11:57:48.641164  604010 logs.go:282] 0 containers: []
	W1213 11:57:48.641173  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:48.641183  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:48.641193  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:48.667031  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:48.667069  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:48.696402  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:48.696431  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:48.752046  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:48.752080  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:48.768352  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:48.768382  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:48.835752  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:48.828038   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:48.828514   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:48.830127   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:48.830542   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:48.831979   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:48.828038   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:48.828514   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:48.830127   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:48.830542   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:48.831979   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:51.337160  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:51.349596  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:51.349697  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:51.384310  604010 cri.go:89] found id: ""
	I1213 11:57:51.384341  604010 logs.go:282] 0 containers: []
	W1213 11:57:51.384350  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:51.384358  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:51.384415  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:51.409502  604010 cri.go:89] found id: ""
	I1213 11:57:51.409523  604010 logs.go:282] 0 containers: []
	W1213 11:57:51.409532  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:51.409539  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:51.409595  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:51.444866  604010 cri.go:89] found id: ""
	I1213 11:57:51.444887  604010 logs.go:282] 0 containers: []
	W1213 11:57:51.444896  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:51.444901  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:51.444957  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:51.498878  604010 cri.go:89] found id: ""
	I1213 11:57:51.498900  604010 logs.go:282] 0 containers: []
	W1213 11:57:51.498908  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:51.498915  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:51.498970  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:51.532054  604010 cri.go:89] found id: ""
	I1213 11:57:51.532082  604010 logs.go:282] 0 containers: []
	W1213 11:57:51.532091  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:51.532098  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:51.532159  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:51.561798  604010 cri.go:89] found id: ""
	I1213 11:57:51.561833  604010 logs.go:282] 0 containers: []
	W1213 11:57:51.561842  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:51.561849  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:51.561906  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:51.586723  604010 cri.go:89] found id: ""
	I1213 11:57:51.586798  604010 logs.go:282] 0 containers: []
	W1213 11:57:51.586820  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:51.586843  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:51.586951  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:51.612513  604010 cri.go:89] found id: ""
	I1213 11:57:51.612538  604010 logs.go:282] 0 containers: []
	W1213 11:57:51.612547  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:51.612557  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:51.612569  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:51.628622  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:51.628650  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:51.699783  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:51.691237   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:51.691797   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:51.693193   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:51.693944   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:51.695717   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:51.691237   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:51.691797   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:51.693193   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:51.693944   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:51.695717   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:51.699815  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:51.699832  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:51.725055  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:51.725092  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:51.758574  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:51.758604  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:54.315140  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:54.325600  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:54.325693  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:54.352056  604010 cri.go:89] found id: ""
	I1213 11:57:54.352081  604010 logs.go:282] 0 containers: []
	W1213 11:57:54.352089  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:54.352096  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:54.352157  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:54.375586  604010 cri.go:89] found id: ""
	I1213 11:57:54.375611  604010 logs.go:282] 0 containers: []
	W1213 11:57:54.375620  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:54.375626  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:54.375683  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:54.399138  604010 cri.go:89] found id: ""
	I1213 11:57:54.399163  604010 logs.go:282] 0 containers: []
	W1213 11:57:54.399172  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:54.399178  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:54.399234  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:54.439999  604010 cri.go:89] found id: ""
	I1213 11:57:54.440025  604010 logs.go:282] 0 containers: []
	W1213 11:57:54.440033  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:54.440039  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:54.440096  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:54.505093  604010 cri.go:89] found id: ""
	I1213 11:57:54.505124  604010 logs.go:282] 0 containers: []
	W1213 11:57:54.505133  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:54.505140  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:54.505198  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:54.529921  604010 cri.go:89] found id: ""
	I1213 11:57:54.529947  604010 logs.go:282] 0 containers: []
	W1213 11:57:54.529956  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:54.529966  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:54.530029  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:54.556363  604010 cri.go:89] found id: ""
	I1213 11:57:54.556390  604010 logs.go:282] 0 containers: []
	W1213 11:57:54.556399  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:54.556406  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:54.556483  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:54.581531  604010 cri.go:89] found id: ""
	I1213 11:57:54.581556  604010 logs.go:282] 0 containers: []
	W1213 11:57:54.581565  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:54.581574  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:54.581603  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:54.637009  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:54.637043  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:54.652919  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:54.652949  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:54.717113  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:54.708684   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:54.709580   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:54.711317   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:54.711640   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:54.713137   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:54.708684   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:54.709580   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:54.711317   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:54.711640   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:54.713137   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:54.717134  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:54.717148  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:54.743116  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:54.743151  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:57.272010  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:57.285875  604010 out.go:203] 
	W1213 11:57:57.288788  604010 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1213 11:57:57.288838  604010 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1213 11:57:57.288853  604010 out.go:285] * Related issues:
	* Related issues:
	W1213 11:57:57.288872  604010 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W1213 11:57:57.288889  604010 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I1213 11:57:57.291728  604010 out.go:203] 

** /stderr **
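Note: the probe cycles above (11:57:42 through 11:57:57) all have the same shape: poll for a kube-apiserver process with `sudo pgrep -xnf kube-apiserver.*minikube.*`, find no matching container for any control-plane component, then re-gather kubelet/containerd/dmesg/describe-nodes logs before the next attempt, until the 6m0s wait budget is exhausted. A minimal Go sketch of that poll-until-deadline pattern (illustrative only, not minikube's actual implementation; the function name and the 3s interval are assumptions):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerProc polls pgrep until an apiserver process appears
    // or the deadline passes, mirroring the retry loop in the log above.
    func waitForAPIServerProc(timeout, interval time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 only when a matching process exists.
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("wait for apiserver proc: apiserver process never appeared")
    }

    func main() {
        if err := waitForAPIServerProc(6*time.Minute, 3*time.Second); err != nil {
            fmt.Println(err) // same wording as the K8S_APISERVER_MISSING detail above
        }
    }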
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p newest-cni-796924 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 105
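Exit status 105 is the code the failing `start` above returned together with its K8S_APISERVER_MISSING error. A hedged sketch of how a harness can recover that numeric code from the child process (arguments abbreviated from the command line quoted above):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // Abbreviated from the failing invocation quoted above.
        err := exec.Command("out/minikube-linux-arm64", "start", "-p", "newest-cni-796924").Run()
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            fmt.Println("minikube exited with status", exitErr.ExitCode()) // 105 in this run
        }
    }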
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
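All three proxy variables are empty here, which rules out proxy interference as a cause. For reference, a trivial sketch of how such a snapshot can be produced (illustrative only; the `<empty>` placeholder matches the report's notation):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        for _, k := range []string{"HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY"} {
            v := os.Getenv(k)
            if v == "" {
                v = "<empty>" // unset, as in the snapshot above
            }
            fmt.Printf("%s=%q\n", k, v)
        }
    }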
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-796924
helpers_test.go:244: (dbg) docker inspect newest-cni-796924:

-- stdout --
	[
	    {
	        "Id": "27aba94e8ede7488df815559f44c17888a5d8cf06635a1b1d81eb33d3d933273",
	        "Created": "2025-12-13T11:41:45.560617227Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 604142,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T11:51:48.770524373Z",
	            "FinishedAt": "2025-12-13T11:51:47.382046067Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/27aba94e8ede7488df815559f44c17888a5d8cf06635a1b1d81eb33d3d933273/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/27aba94e8ede7488df815559f44c17888a5d8cf06635a1b1d81eb33d3d933273/hostname",
	        "HostsPath": "/var/lib/docker/containers/27aba94e8ede7488df815559f44c17888a5d8cf06635a1b1d81eb33d3d933273/hosts",
	        "LogPath": "/var/lib/docker/containers/27aba94e8ede7488df815559f44c17888a5d8cf06635a1b1d81eb33d3d933273/27aba94e8ede7488df815559f44c17888a5d8cf06635a1b1d81eb33d3d933273-json.log",
	        "Name": "/newest-cni-796924",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-796924:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-796924",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "27aba94e8ede7488df815559f44c17888a5d8cf06635a1b1d81eb33d3d933273",
	                "LowerDir": "/var/lib/docker/overlay2/e3273dd39227f26c709bd7f69a32b4360acb8650246414f8098119d5f28e0f70-init/diff:/var/lib/docker/overlay2/5d0ca86320f2c06765995d644a73af30cd6ddb36f5c1a2a6b1ebb24af53cdabd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e3273dd39227f26c709bd7f69a32b4360acb8650246414f8098119d5f28e0f70/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e3273dd39227f26c709bd7f69a32b4360acb8650246414f8098119d5f28e0f70/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e3273dd39227f26c709bd7f69a32b4360acb8650246414f8098119d5f28e0f70/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-796924",
	                "Source": "/var/lib/docker/volumes/newest-cni-796924/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-796924",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-796924",
	                "name.minikube.sigs.k8s.io": "newest-cni-796924",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b9bb40aac9de7cd1274edecaff0f8eaf098acb0d5c0799c0a940ae7311a572ff",
	            "SandboxKey": "/var/run/docker/netns/b9bb40aac9de",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-796924": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:8b:15:a0:38:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "524b54a7afb58fdfadc2532a94da198ca12aafc23248ec4905999b39dfe064e0",
	                    "EndpointID": "b589d458f24f437f5bf8379bb70662db004fdd873d4df2f7211ededbab3c7988",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-796924",
	                        "27aba94e8ede"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
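The inspect output shows the container Running with 8443/tcp published on 127.0.0.1:33443, so the failure sits inside the guest rather than in Docker's port wiring. A small sketch of reading that host port back with a docker Go template (container name as above; assumes a single binding per port, as in this output):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`,
            "newest-cni-796924").Output()
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        fmt.Println("apiserver host port:", strings.TrimSpace(string(out))) // 33443 per the output above
    }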
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-796924 -n newest-cni-796924
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-796924 -n newest-cni-796924: exit status 2 (337.388487ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
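As the helper notes, the non-zero exit may be expected: the host is Running while the control plane is down, and `status` folds component health into its exit code. A sketch querying the remaining fields through the same Go-template flag (the Kubelet and APIServer field names are an assumption from minikube's status template, not verified against this build):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // status exits non-zero when any component is down, so keep the
        // output even when err != nil.
        out, err := exec.Command("out/minikube-linux-arm64", "status",
            "--format", "host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}",
            "-p", "newest-cni-796924").CombinedOutput()
        fmt.Printf("%s (exit: %v)\n", out, err)
    }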
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-796924 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-796924 logs -n 25: (1.87905694s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ embed-certs-951675 image list --format=json                                                                                                                                                                                                                │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ pause   │ -p embed-certs-951675 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ unpause │ -p embed-certs-951675 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ delete  │ -p embed-certs-951675                                                                                                                                                                                                                                      │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ delete  │ -p embed-certs-951675                                                                                                                                                                                                                                      │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ delete  │ -p disable-driver-mounts-823668                                                                                                                                                                                                                            │ disable-driver-mounts-823668 │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ start   │ -p default-k8s-diff-port-191845 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:40 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-191845 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:40 UTC │
	│ stop    │ -p default-k8s-diff-port-191845 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:40 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-191845 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:40 UTC │
	│ start   │ -p default-k8s-diff-port-191845 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:41 UTC │
	│ image   │ default-k8s-diff-port-191845 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ pause   │ -p default-k8s-diff-port-191845 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ unpause │ -p default-k8s-diff-port-191845 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ delete  │ -p default-k8s-diff-port-191845                                                                                                                                                                                                                            │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ delete  │ -p default-k8s-diff-port-191845                                                                                                                                                                                                                            │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ start   │ -p newest-cni-796924 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-796924            │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-333352 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-333352            │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ stop    │ -p no-preload-333352 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-333352            │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │ 13 Dec 25 11:46 UTC │
	│ addons  │ enable dashboard -p no-preload-333352 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-333352            │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │ 13 Dec 25 11:46 UTC │
	│ start   │ -p no-preload-333352 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-333352            │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-796924 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-796924            │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │                     │
	│ stop    │ -p newest-cni-796924 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-796924            │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable dashboard -p newest-cni-796924 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-796924            │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ start   │ -p newest-cni-796924 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-796924            │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 11:51:48
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 11:51:48.463604  604010 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:51:48.463796  604010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:51:48.463823  604010 out.go:374] Setting ErrFile to fd 2...
	I1213 11:51:48.463842  604010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:51:48.464235  604010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 11:51:48.465119  604010 out.go:368] Setting JSON to false
	I1213 11:51:48.466102  604010 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":16461,"bootTime":1765610247,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 11:51:48.466204  604010 start.go:143] virtualization:  
	I1213 11:51:48.469444  604010 out.go:179] * [newest-cni-796924] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:51:48.473497  604010 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:51:48.473608  604010 notify.go:221] Checking for updates...
	I1213 11:51:48.479464  604010 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:51:48.482541  604010 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 11:51:48.485448  604010 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 11:51:48.488462  604010 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:51:48.491424  604010 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:51:48.494980  604010 config.go:182] Loaded profile config "newest-cni-796924": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:51:48.495553  604010 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:51:48.518013  604010 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:51:48.518194  604010 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:51:48.596406  604010 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:51:48.586781308 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:51:48.596541  604010 docker.go:319] overlay module found
	I1213 11:51:48.599865  604010 out.go:179] * Using the docker driver based on existing profile
	I1213 11:51:48.602647  604010 start.go:309] selected driver: docker
	I1213 11:51:48.602672  604010 start.go:927] validating driver "docker" against &{Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:51:48.602834  604010 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:51:48.603569  604010 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:51:48.671569  604010 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:51:48.654666754 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:51:48.671930  604010 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 11:51:48.671965  604010 cni.go:84] Creating CNI manager for ""
	I1213 11:51:48.672022  604010 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 11:51:48.672078  604010 start.go:353] cluster config:
	{Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:51:48.675265  604010 out.go:179] * Starting "newest-cni-796924" primary control-plane node in "newest-cni-796924" cluster
	I1213 11:51:48.678207  604010 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 11:51:48.681114  604010 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:51:48.683920  604010 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 11:51:48.683976  604010 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 11:51:48.683989  604010 cache.go:65] Caching tarball of preloaded images
	I1213 11:51:48.684102  604010 preload.go:238] Found /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 11:51:48.684116  604010 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 11:51:48.684232  604010 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/config.json ...
	I1213 11:51:48.684464  604010 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:51:48.711458  604010 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:51:48.711481  604010 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:51:48.711496  604010 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:51:48.711527  604010 start.go:360] acquireMachinesLock for newest-cni-796924: {Name:mkb23dc851632c47983afd0f3cb215d071a4c6d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:51:48.711588  604010 start.go:364] duration metric: took 38.818µs to acquireMachinesLock for "newest-cni-796924"
	I1213 11:51:48.711608  604010 start.go:96] Skipping create...Using existing machine configuration
	I1213 11:51:48.711613  604010 fix.go:54] fixHost starting: 
	I1213 11:51:48.711888  604010 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:51:48.735758  604010 fix.go:112] recreateIfNeeded on newest-cni-796924: state=Stopped err=<nil>
	W1213 11:51:48.735799  604010 fix.go:138] unexpected machine state, will restart: <nil>
	W1213 11:51:48.171125  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:50.670988  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:51:48.739083  604010 out.go:252] * Restarting existing docker container for "newest-cni-796924" ...
	I1213 11:51:48.739191  604010 cli_runner.go:164] Run: docker start newest-cni-796924
	I1213 11:51:48.989234  604010 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:51:49.013708  604010 kic.go:430] container "newest-cni-796924" state is running.
	I1213 11:51:49.014143  604010 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-796924
	I1213 11:51:49.035818  604010 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/config.json ...
	I1213 11:51:49.036044  604010 machine.go:94] provisionDockerMachine start ...
	I1213 11:51:49.036107  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:49.066663  604010 main.go:143] libmachine: Using SSH client type: native
	I1213 11:51:49.067143  604010 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1213 11:51:49.067157  604010 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:51:49.067832  604010 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47590->127.0.0.1:33440: read: connection reset by peer
	I1213 11:51:52.226322  604010 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-796924
	
	I1213 11:51:52.226353  604010 ubuntu.go:182] provisioning hostname "newest-cni-796924"
	I1213 11:51:52.226417  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:52.244890  604010 main.go:143] libmachine: Using SSH client type: native
	I1213 11:51:52.245240  604010 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1213 11:51:52.245259  604010 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-796924 && echo "newest-cni-796924" | sudo tee /etc/hostname
	I1213 11:51:52.409909  604010 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-796924
	
	I1213 11:51:52.410005  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:52.440908  604010 main.go:143] libmachine: Using SSH client type: native
	I1213 11:51:52.441219  604010 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1213 11:51:52.441235  604010 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-796924' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-796924/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-796924' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:51:52.595320  604010 main.go:143] libmachine: SSH cmd err, output: <nil>: 
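	The hosts script above is idempotent: it rewrites an existing 127.0.1.1 entry in place and only appends one when none is present. A quick check of the end state (a sketch, assuming a shell inside the container):
	  $ grep 127.0.1.1 /etc/hosts
	  127.0.1.1 newest-cni-796924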
	I1213 11:51:52.595345  604010 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-307042/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-307042/.minikube}
	I1213 11:51:52.595378  604010 ubuntu.go:190] setting up certificates
	I1213 11:51:52.595395  604010 provision.go:84] configureAuth start
	I1213 11:51:52.595456  604010 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-796924
	I1213 11:51:52.612730  604010 provision.go:143] copyHostCerts
	I1213 11:51:52.612805  604010 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem, removing ...
	I1213 11:51:52.612815  604010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem
	I1213 11:51:52.612893  604010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem (1082 bytes)
	I1213 11:51:52.612991  604010 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem, removing ...
	I1213 11:51:52.612997  604010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem
	I1213 11:51:52.613022  604010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem (1123 bytes)
	I1213 11:51:52.613072  604010 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem, removing ...
	I1213 11:51:52.613077  604010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem
	I1213 11:51:52.613099  604010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem (1675 bytes)
	I1213 11:51:52.613145  604010 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem org=jenkins.newest-cni-796924 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-796924]
	I1213 11:51:52.732846  604010 provision.go:177] copyRemoteCerts
	I1213 11:51:52.732930  604010 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:51:52.732973  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:52.750653  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:52.855439  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:51:52.874016  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 11:51:52.892129  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 11:51:52.911103  604010 provision.go:87] duration metric: took 315.684656ms to configureAuth
	I1213 11:51:52.911132  604010 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:51:52.911332  604010 config.go:182] Loaded profile config "newest-cni-796924": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:51:52.911340  604010 machine.go:97] duration metric: took 3.875289031s to provisionDockerMachine
	I1213 11:51:52.911347  604010 start.go:293] postStartSetup for "newest-cni-796924" (driver="docker")
	I1213 11:51:52.911359  604010 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:51:52.911407  604010 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:51:52.911460  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:52.929094  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:53.034971  604010 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:51:53.038558  604010 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:51:53.038590  604010 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:51:53.038602  604010 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/addons for local assets ...
	I1213 11:51:53.038659  604010 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/files for local assets ...
	I1213 11:51:53.038763  604010 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> 3089152.pem in /etc/ssl/certs
	I1213 11:51:53.038874  604010 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:51:53.046532  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 11:51:53.064751  604010 start.go:296] duration metric: took 153.388066ms for postStartSetup
	I1213 11:51:53.064850  604010 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:51:53.064897  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:53.083055  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:53.186537  604010 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:51:53.194814  604010 fix.go:56] duration metric: took 4.483190974s for fixHost
	I1213 11:51:53.194902  604010 start.go:83] releasing machines lock for "newest-cni-796924", held for 4.483304896s
	I1213 11:51:53.195014  604010 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-796924
	I1213 11:51:53.218858  604010 ssh_runner.go:195] Run: cat /version.json
	I1213 11:51:53.218911  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:53.219425  604010 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:51:53.219496  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:53.245887  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:53.248082  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:53.440734  604010 ssh_runner.go:195] Run: systemctl --version
	I1213 11:51:53.447618  604010 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:51:53.452306  604010 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:51:53.452441  604010 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:51:53.460789  604010 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 11:51:53.460813  604010 start.go:496] detecting cgroup driver to use...
	I1213 11:51:53.460876  604010 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 11:51:53.460961  604010 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 11:51:53.478830  604010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:51:53.493048  604010 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:51:53.493110  604010 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:51:53.509243  604010 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:51:53.522928  604010 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:51:53.639237  604010 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:51:53.752852  604010 docker.go:234] disabling docker service ...
	I1213 11:51:53.752960  604010 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:51:53.768708  604010 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:51:53.782124  604010 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:51:53.903168  604010 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:51:54.054509  604010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:51:54.067985  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:51:54.083550  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 11:51:54.093447  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 11:51:54.102944  604010 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 11:51:54.103048  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 11:51:54.112424  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:51:54.121802  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 11:51:54.130945  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:51:54.140080  604010 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:51:54.148567  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 11:51:54.157935  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 11:51:54.167456  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 11:51:54.176969  604010 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:51:54.184730  604010 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:51:54.192410  604010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:51:54.297614  604010 ssh_runner.go:195] Run: sudo systemctl restart containerd
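	The sed edits above pin four containerd settings before the restart: SystemdCgroup = false (the cgroupfs driver), sandbox_image = "registry.k8s.io/pause:3.10.1", conf_dir = "/etc/cni/net.d", and enable_unprivileged_ports = true. A spot-check of the rewritten file (a sketch; surrounding TOML context omitted):
	  $ grep -E 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml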
	I1213 11:51:54.415943  604010 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 11:51:54.416062  604010 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 11:51:54.419918  604010 start.go:564] Will wait 60s for crictl version
	I1213 11:51:54.420004  604010 ssh_runner.go:195] Run: which crictl
	I1213 11:51:54.424003  604010 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:51:54.449039  604010 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 11:51:54.449144  604010 ssh_runner.go:195] Run: containerd --version
	I1213 11:51:54.473383  604010 ssh_runner.go:195] Run: containerd --version
	I1213 11:51:54.499419  604010 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 11:51:54.502369  604010 cli_runner.go:164] Run: docker network inspect newest-cni-796924 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:51:54.518648  604010 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 11:51:54.522791  604010 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
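	The temp-file-then-cp dance here is deliberate: with plain output redirection, the redirect would be performed by the unprivileged shell rather than by sudo. The end state is a single pinned gateway entry (a sketch):
	  $ grep host.minikube.internal /etc/hosts
	  192.168.76.1	host.minikube.internal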
	I1213 11:51:54.535931  604010 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 11:51:54.538956  604010 kubeadm.go:884] updating cluster {Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:51:54.539121  604010 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 11:51:54.539232  604010 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:51:54.563801  604010 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 11:51:54.563827  604010 containerd.go:534] Images already preloaded, skipping extraction
	I1213 11:51:54.563893  604010 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:51:54.592245  604010 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 11:51:54.592267  604010 cache_images.go:86] Images are preloaded, skipping loading
	I1213 11:51:54.592274  604010 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 11:51:54.592392  604010 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-796924 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
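	The generated drop-in (installed below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf) clears ExecStart= on its first line so the override fully replaces, rather than appends to, the stock command line. Unit and drop-in can be inspected together (a sketch):
	  $ systemctl cat kubelet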
	I1213 11:51:54.592461  604010 ssh_runner.go:195] Run: sudo crictl info
	I1213 11:51:54.621799  604010 cni.go:84] Creating CNI manager for ""
	I1213 11:51:54.621822  604010 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 11:51:54.621841  604010 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 11:51:54.621863  604010 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-796924 NodeName:newest-cni-796924 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:51:54.621977  604010 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-796924"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
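	Before handing a file like this to kubeadm, it can be checked offline; newer kubeadm releases (v1.26+) ship a validator (a sketch, assuming the versioned binary staged by minikube):
	  $ sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new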
	
	I1213 11:51:54.622049  604010 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 11:51:54.629798  604010 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 11:51:54.629892  604010 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:51:54.637447  604010 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 11:51:54.650384  604010 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 11:51:54.666817  604010 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1213 11:51:54.689998  604010 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:51:54.695776  604010 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:51:54.710482  604010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:51:54.832824  604010 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:51:54.850492  604010 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924 for IP: 192.168.76.2
	I1213 11:51:54.850566  604010 certs.go:195] generating shared ca certs ...
	I1213 11:51:54.850597  604010 certs.go:227] acquiring lock for ca certs: {Name:mkca84a85b1b664dba08ef567e84d493256f825e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:51:54.850790  604010 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key
	I1213 11:51:54.850872  604010 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key
	I1213 11:51:54.850895  604010 certs.go:257] generating profile certs ...
	I1213 11:51:54.851026  604010 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/client.key
	I1213 11:51:54.851129  604010 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key.ced45374
	I1213 11:51:54.851211  604010 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.key
	I1213 11:51:54.851379  604010 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem (1338 bytes)
	W1213 11:51:54.851441  604010 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915_empty.pem, impossibly tiny 0 bytes
	I1213 11:51:54.851467  604010 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:51:54.851513  604010 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:51:54.851568  604010 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:51:54.851620  604010 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem (1675 bytes)
	I1213 11:51:54.851698  604010 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 11:51:54.852295  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:51:54.879994  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:51:54.900131  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:51:54.919515  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:51:54.939840  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 11:51:54.959348  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 11:51:54.977529  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:51:54.995648  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 11:51:55.023031  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /usr/share/ca-certificates/3089152.pem (1708 bytes)
	I1213 11:51:55.043814  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:51:55.063273  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem --> /usr/share/ca-certificates/308915.pem (1338 bytes)
	I1213 11:51:55.083198  604010 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:51:55.097732  604010 ssh_runner.go:195] Run: openssl version
	I1213 11:51:55.104458  604010 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3089152.pem
	I1213 11:51:55.112443  604010 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3089152.pem /etc/ssl/certs/3089152.pem
	I1213 11:51:55.120212  604010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3089152.pem
	I1213 11:51:55.124175  604010 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:22 /usr/share/ca-certificates/3089152.pem
	I1213 11:51:55.124296  604010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3089152.pem
	I1213 11:51:55.166612  604010 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:51:55.174931  604010 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:51:55.182763  604010 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:51:55.190655  604010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:51:55.194550  604010 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:51:55.194637  604010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:51:55.235820  604010 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:51:55.243647  604010 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/308915.pem
	I1213 11:51:55.251252  604010 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/308915.pem /etc/ssl/certs/308915.pem
	I1213 11:51:55.258979  604010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/308915.pem
	I1213 11:51:55.263040  604010 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:22 /usr/share/ca-certificates/308915.pem
	I1213 11:51:55.263115  604010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308915.pem
	I1213 11:51:55.305815  604010 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
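	The pattern in the three blocks above is OpenSSL's hashed-directory convention: a CA is trusted once /etc/ssl/certs contains a symlink named <subject-hash>.0 pointing at the PEM, which is why each openssl x509 -hash run is paired with a test -L on the matching name (a sketch):
	  $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	  b5213941
	  $ test -L /etc/ssl/certs/b5213941.0 && echo trusted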
	I1213 11:51:55.313358  604010 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:51:55.317228  604010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 11:51:55.358360  604010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 11:51:55.399354  604010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 11:51:55.440616  604010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 11:51:55.481788  604010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 11:51:55.527783  604010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
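	The six checks above use openssl's -checkend flag, which exits 0 only if the certificate is still valid the given number of seconds from now, so each one asks whether the cert survives the next 24 hours (a sketch):
	  $ openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	      && echo "valid for at least 24h" || echo "expires within 24h"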
	I1213 11:51:55.570548  604010 kubeadm.go:401] StartCluster: {Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:51:55.570648  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 11:51:55.570740  604010 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:51:55.597807  604010 cri.go:89] found id: ""
	I1213 11:51:55.597910  604010 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:51:55.605830  604010 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 11:51:55.605851  604010 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 11:51:55.605907  604010 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 11:51:55.613526  604010 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 11:51:55.614085  604010 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-796924" does not appear in /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 11:51:55.614332  604010 kubeconfig.go:62] /home/jenkins/minikube-integration/22127-307042/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-796924" cluster setting kubeconfig missing "newest-cni-796924" context setting]
	I1213 11:51:55.614935  604010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/kubeconfig: {Name:mk9039d1d506122ff52224469c478e739c5ebabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:51:55.617326  604010 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 11:51:55.625376  604010 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1213 11:51:55.625455  604010 kubeadm.go:602] duration metric: took 19.59756ms to restartPrimaryControlPlane
	I1213 11:51:55.625473  604010 kubeadm.go:403] duration metric: took 54.935084ms to StartCluster
	I1213 11:51:55.625491  604010 settings.go:142] acquiring lock: {Name:mk079e9a25ebbc2c8fbae42d4c6ed096a652c00b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:51:55.625565  604010 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 11:51:55.626520  604010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/kubeconfig: {Name:mk9039d1d506122ff52224469c478e739c5ebabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:51:55.626793  604010 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 11:51:55.627185  604010 config.go:182] Loaded profile config "newest-cni-796924": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:51:55.627271  604010 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 11:51:55.627363  604010 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-796924"
	I1213 11:51:55.627383  604010 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-796924"
	I1213 11:51:55.627413  604010 host.go:66] Checking if "newest-cni-796924" exists ...
	I1213 11:51:55.627434  604010 addons.go:70] Setting dashboard=true in profile "newest-cni-796924"
	I1213 11:51:55.627450  604010 addons.go:239] Setting addon dashboard=true in "newest-cni-796924"
	W1213 11:51:55.627456  604010 addons.go:248] addon dashboard should already be in state true
	I1213 11:51:55.627477  604010 host.go:66] Checking if "newest-cni-796924" exists ...
	I1213 11:51:55.627878  604010 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:51:55.628091  604010 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:51:55.628783  604010 addons.go:70] Setting default-storageclass=true in profile "newest-cni-796924"
	I1213 11:51:55.628812  604010 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-796924"
	I1213 11:51:55.629112  604010 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:51:55.631079  604010 out.go:179] * Verifying Kubernetes components...
	I1213 11:51:55.634139  604010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:51:55.667375  604010 addons.go:239] Setting addon default-storageclass=true in "newest-cni-796924"
	I1213 11:51:55.667423  604010 host.go:66] Checking if "newest-cni-796924" exists ...
	I1213 11:51:55.667842  604010 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:51:55.688084  604010 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:51:55.691677  604010 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:51:55.691701  604010 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 11:51:55.691785  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:55.697906  604010 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 11:51:55.697933  604010 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 11:51:55.698005  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:55.704903  604010 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 11:51:55.707765  604010 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1213 11:51:53.170873  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:55.171466  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:57.171707  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:51:55.710658  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 11:51:55.710701  604010 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 11:51:55.710771  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:55.754330  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:55.772597  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:55.773144  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:55.866635  604010 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:51:55.926205  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 11:51:55.934055  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:51:55.957399  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 11:51:55.957444  604010 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 11:51:55.971225  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 11:51:55.971291  604010 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 11:51:56.007402  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 11:51:56.007444  604010 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 11:51:56.023097  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 11:51:56.023122  604010 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 11:51:56.039306  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 11:51:56.039347  604010 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 11:51:56.054865  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 11:51:56.054892  604010 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 11:51:56.069056  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 11:51:56.069097  604010 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 11:51:56.083856  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 11:51:56.083885  604010 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 11:51:56.097577  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 11:51:56.097600  604010 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 11:51:56.111351  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
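All ten dashboard manifests go to the apiserver in a single kubectl invocation, as the Run line above shows. A sketch of how such an argument list might be assembled (the sudo and KUBECONFIG handling, and the versioned kubectl path, are omitted here):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// The ten dashboard manifests applied in one kubectl call.
    	manifests := []string{
    		"/etc/kubernetes/addons/dashboard-ns.yaml",
    		"/etc/kubernetes/addons/dashboard-clusterrole.yaml",
    		"/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml",
    		"/etc/kubernetes/addons/dashboard-configmap.yaml",
    		"/etc/kubernetes/addons/dashboard-dp.yaml",
    		"/etc/kubernetes/addons/dashboard-role.yaml",
    		"/etc/kubernetes/addons/dashboard-rolebinding.yaml",
    		"/etc/kubernetes/addons/dashboard-sa.yaml",
    		"/etc/kubernetes/addons/dashboard-secret.yaml",
    		"/etc/kubernetes/addons/dashboard-svc.yaml",
    	}

    	// Interleave "-f <path>" pairs after the apply verb.
    	args := []string{"apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	cmd := exec.Command("kubectl", args...)
    	fmt.Println(cmd.String()) // show the assembled command line
    }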
	I1213 11:51:56.663977  604010 api_server.go:52] waiting for apiserver process to appear ...
	W1213 11:51:56.664058  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.664121  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
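In parallel with the addon applies, the test repeatedly probes for the kube-apiserver process with pgrep -xnf, as in the Run line above. A sketch of that wait loop, relying on pgrep's exit-0-on-match convention; waitForProcess, the poll interval, and the timeout are illustrative, not minikube's code:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForProcess polls `pgrep -xnf <pattern>` until it reports a match
    // or the deadline passes.
    func waitForProcess(pattern string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
    			return nil // pgrep exits 0 when a process matches
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("no process matching %q within %v", pattern, timeout)
    }

    func main() {
    	if err := waitForProcess("kube-apiserver.*minikube.*", 30*time.Second); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("apiserver process is up")
    }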
	W1213 11:51:56.664172  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.664188  604010 retry.go:31] will retry after 289.236479ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.664122  604010 retry.go:31] will retry after 183.877549ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:51:56.664453  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.664469  604010 retry.go:31] will retry after 218.899341ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.849187  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 11:51:56.883801  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:51:56.926668  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.926802  604010 retry.go:31] will retry after 241.089101ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.953849  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:51:56.985603  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.985688  604010 retry.go:31] will retry after 237.809149ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:51:57.026263  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:57.026297  604010 retry.go:31] will retry after 349.427803ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:57.164593  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:51:57.169067  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 11:51:57.224678  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:51:57.234523  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:57.234624  604010 retry.go:31] will retry after 787.051236ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:51:57.297371  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:57.297440  604010 retry.go:31] will retry after 317.469921ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:57.376456  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:51:57.452615  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:57.452649  604010 retry.go:31] will retry after 679.978714ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:57.616149  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 11:51:57.664727  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:51:57.701776  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:57.701820  604010 retry.go:31] will retry after 682.458958ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.022897  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:51:58.088105  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.088141  604010 retry.go:31] will retry after 475.463602ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.133516  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:51:58.165032  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:51:58.230626  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.230659  604010 retry.go:31] will retry after 634.421741ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.385149  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:51:58.461368  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.461471  604010 retry.go:31] will retry after 859.118132ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:51:59.671078  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:02.171305  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
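The two W lines above come from the parallel no-preload test (pid 596998), whose output is interleaved with this run's; it is polling its node's Ready condition against its own apiserver at 192.168.85.2:8443. A sketch of such a poll with client-go (assuming the k8s.io/client-go module), using the node name and kubeconfig path from the log; this is an assumption about the shape of the check, not the test's actual code:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	for {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-333352", metav1.GetOptions{})
    		if err != nil {
    			// Mirrors the "error getting node ... (will retry)" lines above.
    			fmt.Println("error getting node (will retry):", err)
    			time.Sleep(2 * time.Second)
    			continue
    		}
    		for _, c := range node.Status.Conditions {
    			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    				fmt.Println("node is Ready")
    				return
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    }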
	I1213 11:51:58.564227  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:51:58.633858  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.633891  604010 retry.go:31] will retry after 1.632863719s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
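
Every one of these failures is the same event seen from kubectl's side: client-side validation first tries to download the OpenAPI schema from the apiserver, and with nothing listening on port 8443 yet the TCP connect is refused before any manifest is even read. A minimal Go sketch of that probe (hypothetical, not part of the test suite) reproduces the exact dial error in the stderr above:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same dial kubectl performs before validating: while no
		// kube-apiserver is bound to 8443, this prints
		// "connect: connection refused".
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver is accepting connections")
	}
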
	I1213 11:51:58.665061  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:51:58.866071  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:51:58.936827  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.936859  604010 retry.go:31] will retry after 1.533813591s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:59.165263  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:51:59.321822  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:51:59.385607  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:59.385640  604010 retry.go:31] will retry after 2.101781304s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
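
The retry.go:31 lines show the addon applier backing off between attempts, with delays that roughly double plus jitter (1.5s, 1.6s, 2.1s, ... up to 9.3s over this stretch of the log). A minimal sketch of that pattern, assuming a generic retry helper rather than minikube's actual implementation:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff re-runs fn until it succeeds or attempts run out,
	// sleeping a jittered, roughly doubling delay between failures -- the
	// same shape as the "will retry after Ns" lines above.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			delay := time.Duration(float64(base) * float64(int(1)<<i) * (1 + rand.Float64()))
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		// Stand-in for the failing `kubectl apply`; always errors here.
		apply := func() error { return errors.New("connect: connection refused") }
		if err := retryWithBackoff(4, time.Second, apply); err != nil {
			fmt.Println("giving up:", err)
		}
	}
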
	I1213 11:51:59.665231  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:00.164312  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:00.267962  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 11:52:00.471799  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:52:00.516223  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:00.516306  604010 retry.go:31] will retry after 1.542990826s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:52:00.569718  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:00.569762  604010 retry.go:31] will retry after 1.699392085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:00.664868  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:01.165071  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:01.487701  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:52:01.556576  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:01.556610  604010 retry.go:31] will retry after 1.79578881s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:01.665032  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:02.059588  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:52:02.123368  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:02.123421  604010 retry.go:31] will retry after 4.212258745s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:02.164643  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:02.270065  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:52:02.336655  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:02.336687  604010 retry.go:31] will retry after 2.291652574s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:02.665180  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:03.164491  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:03.353076  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:52:03.415819  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:03.415855  604010 retry.go:31] will retry after 3.520621119s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:52:04.171660  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:06.671628  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:03.664666  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:04.164990  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:04.629361  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:52:04.665164  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:04.695856  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:04.695887  604010 retry.go:31] will retry after 5.092647079s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:05.164583  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:05.665005  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:06.164298  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
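
Interleaved with the applies, process 604010 polls for the apiserver every ~500ms with `sudo pgrep -xnf kube-apiserver.*minikube.*` (-f matches the pattern against the full command line, -x requires the whole line to match, -n keeps only the newest matching pid). A rough Go equivalent of that wait loop, assuming the same pattern and cadence:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Poll every 500ms until pgrep finds a running kube-apiserver,
		// mirroring the half-second cadence of the log lines above.
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for range ticker.C {
			// pgrep exits nonzero when nothing matches, so err == nil
			// means the apiserver process exists.
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil {
				fmt.Printf("apiserver pid: %s", out)
				return
			}
		}
	}
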
	I1213 11:52:06.336728  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:52:06.399256  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:06.399289  604010 retry.go:31] will retry after 2.548236052s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:06.664733  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:06.937128  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:52:07.007320  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:07.007359  604010 retry.go:31] will retry after 3.279734506s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:07.164482  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:07.664186  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:08.164259  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:09.170863  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:11.170983  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:08.664905  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:08.947682  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:52:09.039225  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:09.039255  604010 retry.go:31] will retry after 6.163469341s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:09.164651  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:09.664239  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:09.789499  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:52:09.850576  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:09.850610  604010 retry.go:31] will retry after 3.796434626s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:10.165090  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:10.288047  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:52:10.355227  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:10.355265  604010 retry.go:31] will retry after 7.010948619s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:10.664471  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:11.165062  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:11.664272  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:12.164932  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:12.664657  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:13.164305  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
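[Editor's note] Between apply attempts the runner polls for the kube-apiserver process itself roughly every 500 ms, as the repeated pgrep lines show. A sketch of that polling loop around the same command — the function name and the 500 ms interval are assumptions read off the log timestamps, not minikube's source:

// Sketch of the ~500 ms apiserver process poll seen above: run
// `pgrep -xnf kube-apiserver.*minikube.*` repeatedly until the process
// appears or the caller's deadline expires.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerProcess(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		// pgrep exits 0 only when a matching process exists.
		if err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	fmt.Println(waitForAPIServerProcess(ctx))
}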
	W1213 11:52:13.670824  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:15.671074  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
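[Editor's note] The interleaved 596998 lines come from a second concurrent test profile (no-preload-333352) performing the same kind of wait: fetch the Node object and check its Ready condition, retrying on "connection refused". A client-go sketch of the check these node_ready.go warnings imply — an assumed shape, not minikube's actual helper:

// Sketch: fetch the Node and look for the NodeReady condition with
// status True. A transport error (connection refused) is returned to the
// caller, which logs the warning and retries.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err // "connection refused" lands here and is retried
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ready, err := nodeReady(context.Background(), cs, "no-preload-333352")
	fmt.Println(ready, err)
}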
	I1213 11:52:13.647328  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:52:13.664818  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:13.719910  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:13.719942  604010 retry.go:31] will retry after 9.330768854s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
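[Editor's note] The "will retry after ..." messages come from a backoff loop whose delays grow and carry random jitter (7.0 s, 9.3 s, 8.2 s, 13.9 s, ...). A minimal sketch of that pattern — illustrative only; minikube's retry.go computes its delays differently in detail:

// Sketch of retry-with-jittered-backoff: each failure waits a growing,
// randomized interval before re-running the apply, producing delays like
// "will retry after 9.330768854s".
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Double the base delay each attempt and add up to one base
		// interval of jitter so concurrent retries don't synchronize.
		delay := base<<uint(i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	_ = retryWithBackoff(5, 2*time.Second, func() error {
		return fmt.Errorf("connect: connection refused")
	})
}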
	I1213 11:52:14.164344  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:14.664306  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:15.164242  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:15.203030  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:52:15.263577  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:15.263607  604010 retry.go:31] will retry after 8.190073233s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:15.664266  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:16.165207  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:16.664293  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:17.164467  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:17.367027  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:52:17.430899  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:17.430934  604010 retry.go:31] will retry after 13.887712507s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:17.664357  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:18.164881  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:18.170945  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:20.670832  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:18.664960  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:19.164308  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:19.665208  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:20.165105  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:20.664287  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:21.164362  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:21.664274  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:22.164288  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:22.665206  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:23.051577  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:52:23.111902  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:23.111935  604010 retry.go:31] will retry after 11.527342508s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:23.165176  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:23.453917  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:52:23.170872  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:25.171346  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:27.171433  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:23.521291  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:23.521324  604010 retry.go:31] will retry after 14.842315117s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:23.664722  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:24.165113  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:24.664242  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:25.164277  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:25.664353  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:26.164245  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:26.664280  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:27.164344  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:27.664260  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:28.164294  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:29.670795  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:31.671822  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:28.664213  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:29.165160  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:29.664269  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:30.165128  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:30.664169  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:31.164314  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:31.319227  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:52:31.384220  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:31.384257  604010 retry.go:31] will retry after 14.168397615s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:31.664303  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:32.164990  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:32.664299  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:33.164301  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:34.171181  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:36.670803  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:33.664641  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:34.164270  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:34.639887  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:52:34.664451  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:34.713642  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:34.713678  604010 retry.go:31] will retry after 21.545330114s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:35.164160  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:35.665036  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:36.164253  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:36.664233  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:37.164426  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:37.664423  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:38.164585  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:38.364338  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:52:38.426452  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:38.426486  604010 retry.go:31] will retry after 16.958085374s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:52:38.670951  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:41.170820  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:38.665187  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:39.164590  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:39.665128  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:40.164295  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:40.664289  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:41.164238  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:41.664308  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:42.164562  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:42.664974  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:43.164327  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:43.170883  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:45.172031  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:47.670782  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:43.664236  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:44.164970  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:44.664271  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:45.164423  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:45.553023  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:52:45.614931  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:45.614965  604010 retry.go:31] will retry after 19.954026213s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:45.665141  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:46.164288  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:46.664717  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:47.164232  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:47.664844  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:48.164283  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:50.171769  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:52.671828  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:48.665063  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:49.164283  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:49.664430  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:50.165168  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:50.665085  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:51.164301  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:51.664309  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:52.165148  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:52.664704  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:53.164339  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:55.170984  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:56.171498  596998 node_ready.go:38] duration metric: took 6m0.001140759s for node "no-preload-333352" to be "Ready" ...
	I1213 11:52:56.174587  596998 out.go:203] 
	W1213 11:52:56.177556  596998 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 11:52:56.177585  596998 out.go:285] * 
	W1213 11:52:56.179740  596998 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 11:52:56.182759  596998 out.go:203] 
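[Editor's note] The 596998 run ends here: its Ready wait has a hard 6m0s budget ("took 6m0.001140759s"), and when the poll's deadline expires the failure is surfaced as GUEST_START. A sketch of such a bounded poll using k8s.io/apimachinery's wait helpers — the 2 s interval and the choice of helper are assumptions; only the 6-minute timeout is taken from the log:

// Sketch of a bounded readiness poll. When the deadline expires, the
// helper gives up with a deadline error, which minikube reports as
// "WaitNodeCondition: context deadline exceeded".
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	err := wait.PollUntilContextTimeout(context.Background(),
		2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			// Placeholder condition: in the real flow this is the node
			// Ready check; returning (false, nil) keeps polling.
			return false, nil
		})
	fmt.Println(err)
}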
	I1213 11:52:53.664699  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:54.164840  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:54.664218  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:55.165093  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:55.385630  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:52:55.504689  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:55.504722  604010 retry.go:31] will retry after 37.277266145s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:55.664229  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:52:55.664327  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:52:55.694796  604010 cri.go:89] found id: ""
	I1213 11:52:55.694825  604010 logs.go:282] 0 containers: []
	W1213 11:52:55.694835  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:52:55.694843  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:52:55.694903  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:52:55.723663  604010 cri.go:89] found id: ""
	I1213 11:52:55.723688  604010 logs.go:282] 0 containers: []
	W1213 11:52:55.723697  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:52:55.723704  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:52:55.723763  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:52:55.748991  604010 cri.go:89] found id: ""
	I1213 11:52:55.749019  604010 logs.go:282] 0 containers: []
	W1213 11:52:55.749027  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:52:55.749034  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:52:55.749096  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:52:55.774258  604010 cri.go:89] found id: ""
	I1213 11:52:55.774281  604010 logs.go:282] 0 containers: []
	W1213 11:52:55.774290  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:52:55.774297  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:52:55.774355  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:52:55.798762  604010 cri.go:89] found id: ""
	I1213 11:52:55.798788  604010 logs.go:282] 0 containers: []
	W1213 11:52:55.798796  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:52:55.798802  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:52:55.798861  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:52:55.823037  604010 cri.go:89] found id: ""
	I1213 11:52:55.823063  604010 logs.go:282] 0 containers: []
	W1213 11:52:55.823071  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:52:55.823078  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:52:55.823139  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:52:55.847241  604010 cri.go:89] found id: ""
	I1213 11:52:55.847267  604010 logs.go:282] 0 containers: []
	W1213 11:52:55.847276  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:52:55.847283  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:52:55.847343  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:52:55.872394  604010 cri.go:89] found id: ""
	I1213 11:52:55.872464  604010 logs.go:282] 0 containers: []
	W1213 11:52:55.872488  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
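[Editor's note] With the apiserver unreachable, the runner falls back to inventorying containers directly through the CRI: one crictl query per control-plane component, each returning an empty ID list here. A sketch of that lookup — an assumed shape of cri.go's query; the Go wrapper is illustrative:

// Sketch: ask crictl for container IDs filtered by name. An empty result
// means the component has no container at all, matching the
// "0 containers: []" lines above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func findContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a",
		"--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one container ID per line
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := findContainers(c)
		fmt.Printf("%s: %d containers %v, err=%v\n", c, len(ids), ids, err)
	}
}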
	I1213 11:52:55.872505  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:52:55.872518  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:52:55.888592  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:52:55.888623  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:52:55.954582  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:52:55.945990    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:55.946863    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:55.948347    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:55.948763    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:55.950227    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:52:55.945990    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:55.946863    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:55.948347    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:55.948763    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:55.950227    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:52:55.954616  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:52:55.954629  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:52:55.979360  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:52:55.979393  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:52:56.015953  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:52:56.015986  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
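[Editor's note] Log gathering then shells out to journalctl per systemd unit, capped at the last 400 lines. A sketch of that step — an assumed shape that mirrors the Run: lines above rather than minikube's logs.go:

// Sketch: collect recent journal entries for a systemd unit, as the
// kubelet and containerd gathering steps above do.
package main

import (
	"fmt"
	"os/exec"
)

func unitLogs(unit string, lines int) (string, error) {
	cmd := fmt.Sprintf("sudo journalctl -u %s -n %d", unit, lines)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	for _, u := range []string{"kubelet", "containerd"} {
		logs, err := unitLogs(u, 400)
		fmt.Printf("=== %s (err=%v) ===\n%s\n", u, err, logs)
	}
}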
	I1213 11:52:56.262345  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:52:56.407172  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:56.407203  604010 retry.go:31] will retry after 30.096993011s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
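The --validate=false hint in the stderr above only disables client-side schema validation (the OpenAPI download); it would not rescue this apply, since the apply itself dials the same unreachable apiserver. As a sketch, the suggested invocation would be:

    # Sketch only: skips validation but still fails while the apiserver is down.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
      --validate=false -f /etc/kubernetes/addons/storage-provisioner.yaml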
	I1213 11:52:58.574217  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:58.585863  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:52:58.585937  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:52:58.613052  604010 cri.go:89] found id: ""
	I1213 11:52:58.613084  604010 logs.go:282] 0 containers: []
	W1213 11:52:58.613094  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:52:58.613102  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:52:58.613187  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:52:58.639217  604010 cri.go:89] found id: ""
	I1213 11:52:58.639241  604010 logs.go:282] 0 containers: []
	W1213 11:52:58.639250  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:52:58.639256  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:52:58.639323  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:52:58.691503  604010 cri.go:89] found id: ""
	I1213 11:52:58.691529  604010 logs.go:282] 0 containers: []
	W1213 11:52:58.691539  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:52:58.691545  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:52:58.691607  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:52:58.739302  604010 cri.go:89] found id: ""
	I1213 11:52:58.739330  604010 logs.go:282] 0 containers: []
	W1213 11:52:58.739339  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:52:58.739345  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:52:58.739407  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:52:58.768957  604010 cri.go:89] found id: ""
	I1213 11:52:58.768985  604010 logs.go:282] 0 containers: []
	W1213 11:52:58.768994  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:52:58.769001  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:52:58.769114  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:52:58.794144  604010 cri.go:89] found id: ""
	I1213 11:52:58.794172  604010 logs.go:282] 0 containers: []
	W1213 11:52:58.794181  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:52:58.794188  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:52:58.794248  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:52:58.818208  604010 cri.go:89] found id: ""
	I1213 11:52:58.818234  604010 logs.go:282] 0 containers: []
	W1213 11:52:58.818243  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:52:58.818250  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:52:58.818307  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:52:58.841575  604010 cri.go:89] found id: ""
	I1213 11:52:58.841600  604010 logs.go:282] 0 containers: []
	W1213 11:52:58.841613  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
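The enumeration round above condenses to one crictl query per expected control-plane component; the commands below are taken verbatim from the log, wrapped in a loop for readability:

    # One query per component; an empty result for every name means no
    # control-plane container was ever created under /run/containerd/runc/k8s.io.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      sudo crictl ps -a --quiet --name="$name"
    done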
	I1213 11:52:58.841622  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:52:58.841636  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:52:58.867434  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:52:58.867469  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:52:58.898944  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:52:58.898974  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:52:58.954613  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:52:58.954649  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:52:58.970766  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:52:58.970842  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:52:59.034290  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:52:59.026403    1983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:59.026973    1983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:59.028473    1983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:59.028883    1983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:59.030363    1983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:01.534586  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:01.545484  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:01.545555  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:01.572215  604010 cri.go:89] found id: ""
	I1213 11:53:01.572288  604010 logs.go:282] 0 containers: []
	W1213 11:53:01.572302  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:01.572310  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:01.572388  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:01.598159  604010 cri.go:89] found id: ""
	I1213 11:53:01.598188  604010 logs.go:282] 0 containers: []
	W1213 11:53:01.598196  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:01.598203  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:01.598300  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:01.623153  604010 cri.go:89] found id: ""
	I1213 11:53:01.623177  604010 logs.go:282] 0 containers: []
	W1213 11:53:01.623186  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:01.623195  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:01.623261  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:01.649622  604010 cri.go:89] found id: ""
	I1213 11:53:01.649644  604010 logs.go:282] 0 containers: []
	W1213 11:53:01.649652  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:01.649659  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:01.649737  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:01.683094  604010 cri.go:89] found id: ""
	I1213 11:53:01.683119  604010 logs.go:282] 0 containers: []
	W1213 11:53:01.683127  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:01.683133  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:01.683194  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:01.713141  604010 cri.go:89] found id: ""
	I1213 11:53:01.713209  604010 logs.go:282] 0 containers: []
	W1213 11:53:01.713236  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:01.713255  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:01.713329  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:01.743530  604010 cri.go:89] found id: ""
	I1213 11:53:01.743598  604010 logs.go:282] 0 containers: []
	W1213 11:53:01.743644  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:01.743659  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:01.743724  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:01.768540  604010 cri.go:89] found id: ""
	I1213 11:53:01.768567  604010 logs.go:282] 0 containers: []
	W1213 11:53:01.768575  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:01.768585  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:01.768596  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:01.793626  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:01.793664  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:01.820553  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:01.820583  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:01.876734  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:01.876770  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:01.893351  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:01.893425  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:01.982105  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:01.970876    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:01.971602    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:01.973230    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:01.973588    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:01.977591    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:04.482731  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:04.495226  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:04.495299  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:04.521792  604010 cri.go:89] found id: ""
	I1213 11:53:04.521819  604010 logs.go:282] 0 containers: []
	W1213 11:53:04.521829  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:04.521836  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:04.521900  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:04.553223  604010 cri.go:89] found id: ""
	I1213 11:53:04.553249  604010 logs.go:282] 0 containers: []
	W1213 11:53:04.553258  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:04.553264  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:04.553333  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:04.580024  604010 cri.go:89] found id: ""
	I1213 11:53:04.580049  604010 logs.go:282] 0 containers: []
	W1213 11:53:04.580058  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:04.580064  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:04.580123  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:04.622013  604010 cri.go:89] found id: ""
	I1213 11:53:04.622041  604010 logs.go:282] 0 containers: []
	W1213 11:53:04.622050  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:04.622057  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:04.622117  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:04.646212  604010 cri.go:89] found id: ""
	I1213 11:53:04.646236  604010 logs.go:282] 0 containers: []
	W1213 11:53:04.646245  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:04.646251  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:04.646312  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:04.682129  604010 cri.go:89] found id: ""
	I1213 11:53:04.682156  604010 logs.go:282] 0 containers: []
	W1213 11:53:04.682165  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:04.682171  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:04.682288  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:04.710645  604010 cri.go:89] found id: ""
	I1213 11:53:04.710675  604010 logs.go:282] 0 containers: []
	W1213 11:53:04.710706  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:04.710714  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:04.710781  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:04.742882  604010 cri.go:89] found id: ""
	I1213 11:53:04.742906  604010 logs.go:282] 0 containers: []
	W1213 11:53:04.742915  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:04.742926  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:04.742938  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:04.799010  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:04.799046  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:04.814626  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:04.814655  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:04.884663  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:04.876082    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:04.876754    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:04.878443    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:04.878819    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:04.880048    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:04.884686  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:04.884717  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:04.910422  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:04.910589  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:05.570211  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:53:05.631760  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:53:05.631794  604010 retry.go:31] will retry after 44.542402529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1213 11:53:07.442499  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:07.453537  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:07.453615  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:07.482132  604010 cri.go:89] found id: ""
	I1213 11:53:07.482155  604010 logs.go:282] 0 containers: []
	W1213 11:53:07.482163  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:07.482170  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:07.482229  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:07.506787  604010 cri.go:89] found id: ""
	I1213 11:53:07.506813  604010 logs.go:282] 0 containers: []
	W1213 11:53:07.506823  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:07.506829  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:07.506890  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:07.532425  604010 cri.go:89] found id: ""
	I1213 11:53:07.532449  604010 logs.go:282] 0 containers: []
	W1213 11:53:07.532458  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:07.532465  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:07.532527  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:07.557042  604010 cri.go:89] found id: ""
	I1213 11:53:07.557071  604010 logs.go:282] 0 containers: []
	W1213 11:53:07.557081  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:07.557087  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:07.557147  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:07.581888  604010 cri.go:89] found id: ""
	I1213 11:53:07.581919  604010 logs.go:282] 0 containers: []
	W1213 11:53:07.581934  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:07.581940  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:07.582000  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:07.605619  604010 cri.go:89] found id: ""
	I1213 11:53:07.605646  604010 logs.go:282] 0 containers: []
	W1213 11:53:07.605655  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:07.605661  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:07.605722  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:07.631481  604010 cri.go:89] found id: ""
	I1213 11:53:07.631503  604010 logs.go:282] 0 containers: []
	W1213 11:53:07.631511  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:07.631517  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:07.631574  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:07.656152  604010 cri.go:89] found id: ""
	I1213 11:53:07.656178  604010 logs.go:282] 0 containers: []
	W1213 11:53:07.656187  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:07.656196  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:07.656207  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:07.738199  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:07.729773    2303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:07.730173    2303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:07.732061    2303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:07.732672    2303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:07.734342    2303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:07.738218  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:07.738230  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:07.763561  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:07.763597  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:07.791032  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:07.791059  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:07.846125  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:07.846160  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
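Each "Gathering logs for ..." round runs the same three host commands; verbatim from the log, they can be rerun on the node directly:

    sudo journalctl -u containerd -n 400     # container runtime log
    sudo journalctl -u kubelet -n 400        # kubelet log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400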
	I1213 11:53:10.362523  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:10.372985  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:10.373056  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:10.397984  604010 cri.go:89] found id: ""
	I1213 11:53:10.398016  604010 logs.go:282] 0 containers: []
	W1213 11:53:10.398037  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:10.398044  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:10.398121  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:10.423159  604010 cri.go:89] found id: ""
	I1213 11:53:10.423189  604010 logs.go:282] 0 containers: []
	W1213 11:53:10.423198  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:10.423204  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:10.423266  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:10.447027  604010 cri.go:89] found id: ""
	I1213 11:53:10.447055  604010 logs.go:282] 0 containers: []
	W1213 11:53:10.447064  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:10.447071  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:10.447131  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:10.472026  604010 cri.go:89] found id: ""
	I1213 11:53:10.472049  604010 logs.go:282] 0 containers: []
	W1213 11:53:10.472057  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:10.472064  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:10.472122  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:10.503263  604010 cri.go:89] found id: ""
	I1213 11:53:10.503326  604010 logs.go:282] 0 containers: []
	W1213 11:53:10.503352  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:10.503366  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:10.503440  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:10.532481  604010 cri.go:89] found id: ""
	I1213 11:53:10.532509  604010 logs.go:282] 0 containers: []
	W1213 11:53:10.532518  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:10.532524  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:10.532587  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:10.557219  604010 cri.go:89] found id: ""
	I1213 11:53:10.557258  604010 logs.go:282] 0 containers: []
	W1213 11:53:10.557266  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:10.557273  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:10.557342  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:10.585410  604010 cri.go:89] found id: ""
	I1213 11:53:10.585499  604010 logs.go:282] 0 containers: []
	W1213 11:53:10.585522  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:10.585547  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:10.585587  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:10.611450  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:10.611488  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:10.639926  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:10.639954  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:10.696844  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:10.696881  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:10.713623  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:10.713657  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:10.777642  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:10.768681    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:10.769607    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:10.771307    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:10.771820    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:10.773703    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:13.278890  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:13.289748  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:13.289817  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:13.317511  604010 cri.go:89] found id: ""
	I1213 11:53:13.317541  604010 logs.go:282] 0 containers: []
	W1213 11:53:13.317550  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:13.317557  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:13.317618  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:13.343404  604010 cri.go:89] found id: ""
	I1213 11:53:13.343432  604010 logs.go:282] 0 containers: []
	W1213 11:53:13.343441  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:13.343448  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:13.343503  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:13.369007  604010 cri.go:89] found id: ""
	I1213 11:53:13.369030  604010 logs.go:282] 0 containers: []
	W1213 11:53:13.369039  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:13.369046  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:13.369108  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:13.395054  604010 cri.go:89] found id: ""
	I1213 11:53:13.395084  604010 logs.go:282] 0 containers: []
	W1213 11:53:13.395094  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:13.395109  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:13.395171  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:13.424003  604010 cri.go:89] found id: ""
	I1213 11:53:13.424030  604010 logs.go:282] 0 containers: []
	W1213 11:53:13.424039  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:13.424046  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:13.424105  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:13.448932  604010 cri.go:89] found id: ""
	I1213 11:53:13.449012  604010 logs.go:282] 0 containers: []
	W1213 11:53:13.449029  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:13.449036  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:13.449112  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:13.474446  604010 cri.go:89] found id: ""
	I1213 11:53:13.474472  604010 logs.go:282] 0 containers: []
	W1213 11:53:13.474481  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:13.474487  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:13.474611  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:13.501117  604010 cri.go:89] found id: ""
	I1213 11:53:13.501141  604010 logs.go:282] 0 containers: []
	W1213 11:53:13.501150  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:13.501159  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:13.501171  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:13.557792  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:13.557829  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:13.574541  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:13.574574  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:13.639676  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:13.629891    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:13.631830    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:13.632611    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:13.634220    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:13.634886    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:13.639700  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:13.639713  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:13.664830  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:13.664911  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
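Every round opens with the process probe below (verbatim from the log). With -f, pgrep matches against the full command line, -x requires the whole line to match the pattern, and -n picks the newest match; exit status 1 (no match) is what sends minikube back into the crictl enumeration above:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'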
	I1213 11:53:16.204971  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:16.215560  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:16.215635  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:16.240196  604010 cri.go:89] found id: ""
	I1213 11:53:16.240220  604010 logs.go:282] 0 containers: []
	W1213 11:53:16.240229  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:16.240235  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:16.240293  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:16.265455  604010 cri.go:89] found id: ""
	I1213 11:53:16.265487  604010 logs.go:282] 0 containers: []
	W1213 11:53:16.265497  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:16.265504  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:16.265562  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:16.289852  604010 cri.go:89] found id: ""
	I1213 11:53:16.289875  604010 logs.go:282] 0 containers: []
	W1213 11:53:16.289886  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:16.289893  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:16.289954  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:16.315329  604010 cri.go:89] found id: ""
	I1213 11:53:16.315353  604010 logs.go:282] 0 containers: []
	W1213 11:53:16.315362  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:16.315368  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:16.315433  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:16.346811  604010 cri.go:89] found id: ""
	I1213 11:53:16.346835  604010 logs.go:282] 0 containers: []
	W1213 11:53:16.346844  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:16.346856  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:16.346916  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:16.371580  604010 cri.go:89] found id: ""
	I1213 11:53:16.371608  604010 logs.go:282] 0 containers: []
	W1213 11:53:16.371617  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:16.371623  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:16.371759  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:16.397183  604010 cri.go:89] found id: ""
	I1213 11:53:16.397210  604010 logs.go:282] 0 containers: []
	W1213 11:53:16.397219  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:16.397225  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:16.397286  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:16.422782  604010 cri.go:89] found id: ""
	I1213 11:53:16.422810  604010 logs.go:282] 0 containers: []
	W1213 11:53:16.422821  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:16.422831  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:16.422848  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:16.478667  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:16.478714  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:16.494974  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:16.495011  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:16.560810  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:16.552790    2649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:16.553168    2649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:16.554711    2649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:16.555221    2649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:16.556703    2649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:16.560835  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:16.560849  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:16.586263  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:16.586301  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
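The describe-nodes step keeps failing for a structural reason visible in the probes above: no kube-apiserver container exists, so nothing listens on localhost:8443 inside the node and every kubectl call is refused at the TCP level, not rejected by auth or a bad kubeconfig. A hedged triage sketch, not taken from this log, to confirm that reading:

	# Hypothetical manual checks (assumed shell access to the node):
	sudo ss -ltn 'sport = :8443'                 # no listener -> no apiserver
	sudo crictl ps -a --name=kube-apiserver      # empty, matching the log
	curl -ks https://localhost:8443/livez        # would answer if a server were up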
	I1213 11:53:19.117851  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:19.128831  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:19.128899  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:19.156507  604010 cri.go:89] found id: ""
	I1213 11:53:19.156537  604010 logs.go:282] 0 containers: []
	W1213 11:53:19.156546  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:19.156553  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:19.156619  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:19.184004  604010 cri.go:89] found id: ""
	I1213 11:53:19.184032  604010 logs.go:282] 0 containers: []
	W1213 11:53:19.184041  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:19.184048  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:19.184108  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:19.210447  604010 cri.go:89] found id: ""
	I1213 11:53:19.210475  604010 logs.go:282] 0 containers: []
	W1213 11:53:19.210485  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:19.210491  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:19.210563  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:19.243214  604010 cri.go:89] found id: ""
	I1213 11:53:19.243241  604010 logs.go:282] 0 containers: []
	W1213 11:53:19.243250  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:19.243257  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:19.243317  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:19.267811  604010 cri.go:89] found id: ""
	I1213 11:53:19.267835  604010 logs.go:282] 0 containers: []
	W1213 11:53:19.267845  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:19.267851  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:19.267912  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:19.291841  604010 cri.go:89] found id: ""
	I1213 11:53:19.291863  604010 logs.go:282] 0 containers: []
	W1213 11:53:19.291872  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:19.291878  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:19.291942  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:19.316863  604010 cri.go:89] found id: ""
	I1213 11:53:19.316890  604010 logs.go:282] 0 containers: []
	W1213 11:53:19.316898  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:19.316904  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:19.316963  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:19.341844  604010 cri.go:89] found id: ""
	I1213 11:53:19.341872  604010 logs.go:282] 0 containers: []
	W1213 11:53:19.341881  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:19.341890  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:19.341901  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:19.397829  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:19.397868  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:19.413720  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:19.413749  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:19.481667  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:19.473280    2765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:19.474094    2765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:19.475625    2765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:19.476130    2765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:19.477751    2765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:19.481694  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:19.481706  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:19.507029  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:19.507069  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:22.036187  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:22.047443  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:22.047516  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:22.073399  604010 cri.go:89] found id: ""
	I1213 11:53:22.073425  604010 logs.go:282] 0 containers: []
	W1213 11:53:22.073433  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:22.073440  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:22.073519  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:22.102458  604010 cri.go:89] found id: ""
	I1213 11:53:22.102483  604010 logs.go:282] 0 containers: []
	W1213 11:53:22.102492  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:22.102499  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:22.102564  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:22.127170  604010 cri.go:89] found id: ""
	I1213 11:53:22.127195  604010 logs.go:282] 0 containers: []
	W1213 11:53:22.127203  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:22.127210  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:22.127270  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:22.152852  604010 cri.go:89] found id: ""
	I1213 11:53:22.152879  604010 logs.go:282] 0 containers: []
	W1213 11:53:22.152887  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:22.152894  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:22.152972  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:22.194915  604010 cri.go:89] found id: ""
	I1213 11:53:22.194939  604010 logs.go:282] 0 containers: []
	W1213 11:53:22.194947  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:22.194985  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:22.195074  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:22.228469  604010 cri.go:89] found id: ""
	I1213 11:53:22.228497  604010 logs.go:282] 0 containers: []
	W1213 11:53:22.228507  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:22.228514  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:22.228574  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:22.257833  604010 cri.go:89] found id: ""
	I1213 11:53:22.257908  604010 logs.go:282] 0 containers: []
	W1213 11:53:22.257931  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:22.257949  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:22.258038  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:22.283351  604010 cri.go:89] found id: ""
	I1213 11:53:22.283375  604010 logs.go:282] 0 containers: []
	W1213 11:53:22.283385  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:22.283394  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:22.283425  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:22.339722  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:22.339759  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:22.358616  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:22.358649  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:22.425578  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:22.417365    2881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:22.418082    2881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:22.419768    2881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:22.420247    2881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:22.421786    2881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:22.425645  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:22.425665  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:22.450867  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:22.450905  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:24.977642  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:24.988556  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:24.988625  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:25.016189  604010 cri.go:89] found id: ""
	I1213 11:53:25.016224  604010 logs.go:282] 0 containers: []
	W1213 11:53:25.016247  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:25.016255  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:25.016320  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:25.044535  604010 cri.go:89] found id: ""
	I1213 11:53:25.044558  604010 logs.go:282] 0 containers: []
	W1213 11:53:25.044567  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:25.044573  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:25.044632  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:25.070715  604010 cri.go:89] found id: ""
	I1213 11:53:25.070743  604010 logs.go:282] 0 containers: []
	W1213 11:53:25.070752  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:25.070759  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:25.070822  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:25.096936  604010 cri.go:89] found id: ""
	I1213 11:53:25.096959  604010 logs.go:282] 0 containers: []
	W1213 11:53:25.096967  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:25.096974  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:25.097035  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:25.122437  604010 cri.go:89] found id: ""
	I1213 11:53:25.122470  604010 logs.go:282] 0 containers: []
	W1213 11:53:25.122480  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:25.122486  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:25.122584  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:25.148962  604010 cri.go:89] found id: ""
	I1213 11:53:25.148988  604010 logs.go:282] 0 containers: []
	W1213 11:53:25.148997  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:25.149003  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:25.149074  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:25.181633  604010 cri.go:89] found id: ""
	I1213 11:53:25.181655  604010 logs.go:282] 0 containers: []
	W1213 11:53:25.181664  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:25.181670  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:25.181732  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:25.212760  604010 cri.go:89] found id: ""
	I1213 11:53:25.212782  604010 logs.go:282] 0 containers: []
	W1213 11:53:25.212790  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:25.212799  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:25.212811  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:25.276581  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:25.268697    2988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:25.269118    2988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:25.270651    2988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:25.271026    2988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:25.272496    2988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:25.276603  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:25.276616  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:25.302726  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:25.302763  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:25.334110  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:25.334183  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:25.390064  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:25.390100  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
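For reference, the gather step pulls a bounded tail of each relevant source; the flags in the dmesg invocation are standard util-linux options, annotated here:

	sudo journalctl -u kubelet -n 400      # last 400 journal lines of the kubelet unit
	sudo journalctl -u containerd -n 400   # same for containerd
	# -P: no pager, -H: human-readable timestamps, -L=never: strip color,
	# --level: keep only warning-or-worse records.
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400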
	I1213 11:53:26.504848  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:53:26.566930  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:53:26.567035  604010 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
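The storage-provisioner apply fails before anything reaches the cluster: kubectl's client-side validation tries to download the OpenAPI schema from the dead apiserver and the dial is refused. The suggested --validate=false would only skip that download; the apply itself still needs a live server, so minikube's "will retry" path is the real recovery. A rough sketch of what that retry amounts to (paths and the apply command are verbatim from the log; the wait loop is an illustration, not minikube's actual code):

	# Wait for the apiserver to answer its readiness endpoint, then re-apply.
	until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get --raw=/readyz >/dev/null 2>&1; do
	  sleep 2
	done
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
	    -f /etc/kubernetes/addons/storage-provisioner.yaml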
	I1213 11:53:27.907342  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:27.919244  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:27.919322  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:27.953618  604010 cri.go:89] found id: ""
	I1213 11:53:27.953646  604010 logs.go:282] 0 containers: []
	W1213 11:53:27.953656  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:27.953662  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:27.953732  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:27.983451  604010 cri.go:89] found id: ""
	I1213 11:53:27.983474  604010 logs.go:282] 0 containers: []
	W1213 11:53:27.983483  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:27.983494  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:27.983553  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:28.015089  604010 cri.go:89] found id: ""
	I1213 11:53:28.015124  604010 logs.go:282] 0 containers: []
	W1213 11:53:28.015133  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:28.015141  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:28.015206  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:28.040741  604010 cri.go:89] found id: ""
	I1213 11:53:28.040764  604010 logs.go:282] 0 containers: []
	W1213 11:53:28.040773  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:28.040780  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:28.040847  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:28.066994  604010 cri.go:89] found id: ""
	I1213 11:53:28.067023  604010 logs.go:282] 0 containers: []
	W1213 11:53:28.067032  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:28.067039  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:28.067100  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:28.096788  604010 cri.go:89] found id: ""
	I1213 11:53:28.096819  604010 logs.go:282] 0 containers: []
	W1213 11:53:28.096828  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:28.096835  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:28.096896  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:28.124766  604010 cri.go:89] found id: ""
	I1213 11:53:28.124789  604010 logs.go:282] 0 containers: []
	W1213 11:53:28.124798  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:28.124804  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:28.124873  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:28.159549  604010 cri.go:89] found id: ""
	I1213 11:53:28.159577  604010 logs.go:282] 0 containers: []
	W1213 11:53:28.159585  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:28.159594  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:28.159606  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:28.199573  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:28.199603  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:28.270740  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:28.270789  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:28.287502  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:28.287532  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:28.351364  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:28.343352    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:28.343924    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:28.345385    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:28.345783    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:28.347266    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:28.351388  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:28.351401  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:30.876922  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:30.887774  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:30.887849  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:30.923850  604010 cri.go:89] found id: ""
	I1213 11:53:30.923878  604010 logs.go:282] 0 containers: []
	W1213 11:53:30.923887  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:30.923893  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:30.923952  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:30.951470  604010 cri.go:89] found id: ""
	I1213 11:53:30.951498  604010 logs.go:282] 0 containers: []
	W1213 11:53:30.951507  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:30.951513  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:30.951570  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:30.984618  604010 cri.go:89] found id: ""
	I1213 11:53:30.984644  604010 logs.go:282] 0 containers: []
	W1213 11:53:30.984653  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:30.984659  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:30.984718  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:31.013958  604010 cri.go:89] found id: ""
	I1213 11:53:31.013986  604010 logs.go:282] 0 containers: []
	W1213 11:53:31.013994  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:31.014001  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:31.014062  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:31.039624  604010 cri.go:89] found id: ""
	I1213 11:53:31.039651  604010 logs.go:282] 0 containers: []
	W1213 11:53:31.039661  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:31.039668  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:31.039735  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:31.065442  604010 cri.go:89] found id: ""
	I1213 11:53:31.065471  604010 logs.go:282] 0 containers: []
	W1213 11:53:31.065480  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:31.065526  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:31.065591  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:31.093987  604010 cri.go:89] found id: ""
	I1213 11:53:31.094012  604010 logs.go:282] 0 containers: []
	W1213 11:53:31.094022  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:31.094028  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:31.094092  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:31.120512  604010 cri.go:89] found id: ""
	I1213 11:53:31.120536  604010 logs.go:282] 0 containers: []
	W1213 11:53:31.120545  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:31.120555  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:31.120568  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:31.193061  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:31.184276    3220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:31.185271    3220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:31.187099    3220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:31.187409    3220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:31.188923    3220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:31.193086  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:31.193099  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:31.222013  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:31.222046  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:31.251352  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:31.251380  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:31.307515  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:31.307558  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:32.782865  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:53:32.843769  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:53:32.843886  604010 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
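The "default-storageclass" addon fails in exactly the same way and for the same reason; the wait-then-apply sketch after the storage-provisioner block applies unchanged with /etc/kubernetes/addons/storageclass.yaml in place of the storage-provisioner manifest.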
	I1213 11:53:33.825081  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:33.836405  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:33.836483  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:33.862074  604010 cri.go:89] found id: ""
	I1213 11:53:33.862097  604010 logs.go:282] 0 containers: []
	W1213 11:53:33.862108  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:33.862114  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:33.862174  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:33.887847  604010 cri.go:89] found id: ""
	I1213 11:53:33.887872  604010 logs.go:282] 0 containers: []
	W1213 11:53:33.887881  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:33.887888  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:33.887953  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:33.922816  604010 cri.go:89] found id: ""
	I1213 11:53:33.922839  604010 logs.go:282] 0 containers: []
	W1213 11:53:33.922847  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:33.922854  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:33.922912  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:33.956255  604010 cri.go:89] found id: ""
	I1213 11:53:33.956278  604010 logs.go:282] 0 containers: []
	W1213 11:53:33.956286  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:33.956296  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:33.956357  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:33.988633  604010 cri.go:89] found id: ""
	I1213 11:53:33.988660  604010 logs.go:282] 0 containers: []
	W1213 11:53:33.988668  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:33.988675  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:33.988734  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:34.016574  604010 cri.go:89] found id: ""
	I1213 11:53:34.016600  604010 logs.go:282] 0 containers: []
	W1213 11:53:34.016610  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:34.016618  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:34.016688  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:34.047246  604010 cri.go:89] found id: ""
	I1213 11:53:34.047274  604010 logs.go:282] 0 containers: []
	W1213 11:53:34.047283  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:34.047290  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:34.047351  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:34.073767  604010 cri.go:89] found id: ""
	I1213 11:53:34.073791  604010 logs.go:282] 0 containers: []
	W1213 11:53:34.073801  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:34.073810  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:34.073821  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:34.142086  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:34.142126  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:34.160135  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:34.160221  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:34.242780  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:34.234520    3342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:34.235116    3342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:34.236649    3342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:34.237063    3342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:34.238589    3342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:34.242803  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:34.242817  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:34.268944  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:34.268981  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:36.800525  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:36.813555  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:36.813631  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:36.838503  604010 cri.go:89] found id: ""
	I1213 11:53:36.838530  604010 logs.go:282] 0 containers: []
	W1213 11:53:36.838539  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:36.838546  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:36.838610  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:36.863532  604010 cri.go:89] found id: ""
	I1213 11:53:36.863553  604010 logs.go:282] 0 containers: []
	W1213 11:53:36.863562  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:36.863569  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:36.863629  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:36.888886  604010 cri.go:89] found id: ""
	I1213 11:53:36.888912  604010 logs.go:282] 0 containers: []
	W1213 11:53:36.888920  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:36.888926  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:36.888992  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:36.917481  604010 cri.go:89] found id: ""
	I1213 11:53:36.917566  604010 logs.go:282] 0 containers: []
	W1213 11:53:36.917589  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:36.917608  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:36.917708  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:36.951605  604010 cri.go:89] found id: ""
	I1213 11:53:36.951676  604010 logs.go:282] 0 containers: []
	W1213 11:53:36.951698  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:36.951716  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:36.951808  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:36.980776  604010 cri.go:89] found id: ""
	I1213 11:53:36.980798  604010 logs.go:282] 0 containers: []
	W1213 11:53:36.980807  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:36.980814  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:36.980878  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:37.014102  604010 cri.go:89] found id: ""
	I1213 11:53:37.014129  604010 logs.go:282] 0 containers: []
	W1213 11:53:37.014139  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:37.014146  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:37.014218  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:37.041045  604010 cri.go:89] found id: ""
	I1213 11:53:37.041068  604010 logs.go:282] 0 containers: []
	W1213 11:53:37.041076  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:37.041086  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:37.041099  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:37.057607  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:37.057677  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:37.123513  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:37.114613    3451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:37.115389    3451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:37.117143    3451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:37.117811    3451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:37.119588    3451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:37.123585  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:37.123612  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:37.149745  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:37.149782  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:37.190123  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:37.190160  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
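The cycle above is minikube's control-plane probe: over SSH it runs "sudo crictl ps -a --quiet --name=<component>" for each expected component, and because every query returns an empty ID list it falls back to gathering kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying a few seconds later. Below is a minimal Go sketch of that probe shape, assuming crictl is runnable via sudo on the node; the component list is copied from the log, everything else is illustrative and not minikube's actual code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Components the probe expects to find as CRI containers (from the log above).
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

func main() {
	for _, name := range components {
		// Mirrors the logged command: sudo crictl ps -a --quiet --name=<name>
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			// Matches the log's: No container was found matching "<name>"
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%q: found %d container(s)\n", name, len(ids))
	}
}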
	I1213 11:53:39.753400  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:39.766329  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:39.766428  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:39.794895  604010 cri.go:89] found id: ""
	I1213 11:53:39.794979  604010 logs.go:282] 0 containers: []
	W1213 11:53:39.794995  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:39.795003  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:39.795077  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:39.819418  604010 cri.go:89] found id: ""
	I1213 11:53:39.819444  604010 logs.go:282] 0 containers: []
	W1213 11:53:39.819453  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:39.819462  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:39.819522  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:39.847949  604010 cri.go:89] found id: ""
	I1213 11:53:39.847976  604010 logs.go:282] 0 containers: []
	W1213 11:53:39.847985  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:39.847992  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:39.848064  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:39.872978  604010 cri.go:89] found id: ""
	I1213 11:53:39.873009  604010 logs.go:282] 0 containers: []
	W1213 11:53:39.873018  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:39.873025  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:39.873091  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:39.900210  604010 cri.go:89] found id: ""
	I1213 11:53:39.900236  604010 logs.go:282] 0 containers: []
	W1213 11:53:39.900245  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:39.900252  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:39.900311  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:39.934251  604010 cri.go:89] found id: ""
	I1213 11:53:39.934276  604010 logs.go:282] 0 containers: []
	W1213 11:53:39.934285  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:39.934291  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:39.934351  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:39.964389  604010 cri.go:89] found id: ""
	I1213 11:53:39.964416  604010 logs.go:282] 0 containers: []
	W1213 11:53:39.964425  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:39.964431  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:39.964496  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:39.995412  604010 cri.go:89] found id: ""
	I1213 11:53:39.995435  604010 logs.go:282] 0 containers: []
	W1213 11:53:39.995444  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:39.995454  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:39.995466  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:40.074600  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:40.074644  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:40.093065  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:40.093143  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:40.162566  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:40.153392    3563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:40.154048    3563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:40.155849    3563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:40.156585    3563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:40.158356    3563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:40.162633  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:40.162659  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:40.191469  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:40.191548  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:42.738325  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:42.749369  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:42.749435  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:42.776660  604010 cri.go:89] found id: ""
	I1213 11:53:42.776686  604010 logs.go:282] 0 containers: []
	W1213 11:53:42.776695  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:42.776701  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:42.776761  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:42.802014  604010 cri.go:89] found id: ""
	I1213 11:53:42.802042  604010 logs.go:282] 0 containers: []
	W1213 11:53:42.802051  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:42.802057  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:42.802116  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:42.826554  604010 cri.go:89] found id: ""
	I1213 11:53:42.826583  604010 logs.go:282] 0 containers: []
	W1213 11:53:42.826592  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:42.826598  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:42.826659  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:42.853269  604010 cri.go:89] found id: ""
	I1213 11:53:42.853296  604010 logs.go:282] 0 containers: []
	W1213 11:53:42.853305  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:42.853319  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:42.853384  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:42.880122  604010 cri.go:89] found id: ""
	I1213 11:53:42.880150  604010 logs.go:282] 0 containers: []
	W1213 11:53:42.880159  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:42.880166  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:42.880227  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:42.904811  604010 cri.go:89] found id: ""
	I1213 11:53:42.904834  604010 logs.go:282] 0 containers: []
	W1213 11:53:42.904843  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:42.904850  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:42.904908  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:42.930715  604010 cri.go:89] found id: ""
	I1213 11:53:42.930744  604010 logs.go:282] 0 containers: []
	W1213 11:53:42.930753  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:42.930759  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:42.930815  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:42.964092  604010 cri.go:89] found id: ""
	I1213 11:53:42.964115  604010 logs.go:282] 0 containers: []
	W1213 11:53:42.964123  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:42.964132  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:42.964144  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:42.994219  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:42.994254  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:43.031007  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:43.031036  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:43.086377  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:43.086412  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:43.103185  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:43.103216  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:43.180526  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:43.171640    3695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:43.172414    3695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:43.174057    3695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:43.174649    3695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:43.176278    3695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
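Every "describe nodes" attempt fails the same way: kubectl reads /var/lib/minikube/kubeconfig, which targets https://localhost:8443, and since no kube-apiserver container exists the TCP dial is refused outright. A quick reachability check makes that failure mode visible; this is a standalone sketch assuming the same localhost:8443 endpoint, not minikube code.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// A refused TCP dial means nothing is listening on the apiserver port
	// at all, as opposed to a server that is up but returning errors.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port unreachable:", err) // e.g. connect: connection refused
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}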
	I1213 11:53:45.681512  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:45.691980  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:45.692050  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:45.720468  604010 cri.go:89] found id: ""
	I1213 11:53:45.720494  604010 logs.go:282] 0 containers: []
	W1213 11:53:45.720503  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:45.720509  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:45.720566  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:45.745270  604010 cri.go:89] found id: ""
	I1213 11:53:45.745297  604010 logs.go:282] 0 containers: []
	W1213 11:53:45.745305  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:45.745312  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:45.745371  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:45.771959  604010 cri.go:89] found id: ""
	I1213 11:53:45.771989  604010 logs.go:282] 0 containers: []
	W1213 11:53:45.771998  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:45.772005  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:45.772063  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:45.797561  604010 cri.go:89] found id: ""
	I1213 11:53:45.797588  604010 logs.go:282] 0 containers: []
	W1213 11:53:45.797597  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:45.797604  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:45.797666  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:45.821937  604010 cri.go:89] found id: ""
	I1213 11:53:45.821965  604010 logs.go:282] 0 containers: []
	W1213 11:53:45.821975  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:45.821981  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:45.822041  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:45.854390  604010 cri.go:89] found id: ""
	I1213 11:53:45.854414  604010 logs.go:282] 0 containers: []
	W1213 11:53:45.854423  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:45.854430  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:45.854489  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:45.879570  604010 cri.go:89] found id: ""
	I1213 11:53:45.879597  604010 logs.go:282] 0 containers: []
	W1213 11:53:45.879616  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:45.879623  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:45.879681  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:45.904307  604010 cri.go:89] found id: ""
	I1213 11:53:45.904335  604010 logs.go:282] 0 containers: []
	W1213 11:53:45.904344  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:45.904354  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:45.904364  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:45.971467  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:45.971554  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:45.988842  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:45.988868  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:46.054484  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:46.046672    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:46.047076    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:46.048668    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:46.049161    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:46.050614    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:46.054553  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:46.054579  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:46.079997  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:46.080032  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:48.608207  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:48.618848  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:48.618926  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:48.644320  604010 cri.go:89] found id: ""
	I1213 11:53:48.644344  604010 logs.go:282] 0 containers: []
	W1213 11:53:48.644352  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:48.644359  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:48.644420  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:48.669194  604010 cri.go:89] found id: ""
	I1213 11:53:48.669226  604010 logs.go:282] 0 containers: []
	W1213 11:53:48.669236  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:48.669242  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:48.669308  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:48.694072  604010 cri.go:89] found id: ""
	I1213 11:53:48.694097  604010 logs.go:282] 0 containers: []
	W1213 11:53:48.694107  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:48.694113  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:48.694188  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:48.718654  604010 cri.go:89] found id: ""
	I1213 11:53:48.718679  604010 logs.go:282] 0 containers: []
	W1213 11:53:48.718720  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:48.718727  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:48.718800  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:48.742539  604010 cri.go:89] found id: ""
	I1213 11:53:48.742571  604010 logs.go:282] 0 containers: []
	W1213 11:53:48.742580  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:48.742587  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:48.742660  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:48.771087  604010 cri.go:89] found id: ""
	I1213 11:53:48.771111  604010 logs.go:282] 0 containers: []
	W1213 11:53:48.771120  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:48.771126  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:48.771185  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:48.797732  604010 cri.go:89] found id: ""
	I1213 11:53:48.797755  604010 logs.go:282] 0 containers: []
	W1213 11:53:48.797764  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:48.797770  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:48.797834  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:48.822805  604010 cri.go:89] found id: ""
	I1213 11:53:48.822830  604010 logs.go:282] 0 containers: []
	W1213 11:53:48.822839  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:48.822849  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:48.822860  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:48.879446  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:48.879514  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:48.895910  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:48.895938  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:48.987206  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:48.978941    3903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:48.979739    3903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:48.981488    3903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:48.981826    3903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:48.983267    3903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:48.987238  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:48.987251  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:49.014114  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:49.014150  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:50.175475  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:53:50.239481  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:53:50.239579  604010 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 11:53:50.242787  604010 out.go:179] * Enabled addons: 
	I1213 11:53:50.245448  604010 addons.go:530] duration metric: took 1m54.618181483s for enable addons: enabled=[]
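The dashboard addon fails for the same root cause: kubectl apply validates each manifest against the server's OpenAPI document (GET /openapi/v2), which requires a reachable apiserver, so all ten dashboard manifests are rejected with connection refused and the enable step ends after 1m54s with an empty addon list. The "apply failed, will retry" warning implies a retry loop around the apply; here is a hedged Go sketch of that shape, where the retry count, backoff, and the trimmed manifest list are assumptions for illustration, not minikube's actual retry policy.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Illustrative subset of the dashboard manifests applied in the log.
	args := []string{
		"--kubeconfig=/var/lib/minikube/kubeconfig", "apply", "--force",
		"-f", "/etc/kubernetes/addons/dashboard-ns.yaml",
		"-f", "/etc/kubernetes/addons/dashboard-svc.yaml",
	}
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Println("manifests applied")
			return
		}
		// With the apiserver down this keeps failing with
		// "connection refused" until the loop gives up.
		fmt.Printf("attempt %d failed: %v\n%s", attempt, err, out)
		time.Sleep(time.Duration(attempt) * 2 * time.Second)
	}
	fmt.Println("giving up: apiserver never became reachable")
}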
	I1213 11:53:51.543477  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:51.554449  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:51.554521  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:51.579307  604010 cri.go:89] found id: ""
	I1213 11:53:51.579335  604010 logs.go:282] 0 containers: []
	W1213 11:53:51.579344  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:51.579350  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:51.579411  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:51.605002  604010 cri.go:89] found id: ""
	I1213 11:53:51.605029  604010 logs.go:282] 0 containers: []
	W1213 11:53:51.605040  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:51.605047  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:51.605108  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:51.629728  604010 cri.go:89] found id: ""
	I1213 11:53:51.629761  604010 logs.go:282] 0 containers: []
	W1213 11:53:51.629770  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:51.629777  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:51.629840  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:51.656823  604010 cri.go:89] found id: ""
	I1213 11:53:51.656846  604010 logs.go:282] 0 containers: []
	W1213 11:53:51.656855  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:51.656862  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:51.656919  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:51.684689  604010 cri.go:89] found id: ""
	I1213 11:53:51.684712  604010 logs.go:282] 0 containers: []
	W1213 11:53:51.684721  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:51.684728  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:51.684787  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:51.709741  604010 cri.go:89] found id: ""
	I1213 11:53:51.709768  604010 logs.go:282] 0 containers: []
	W1213 11:53:51.709776  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:51.709784  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:51.709895  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:51.735821  604010 cri.go:89] found id: ""
	I1213 11:53:51.735848  604010 logs.go:282] 0 containers: []
	W1213 11:53:51.735857  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:51.735863  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:51.735922  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:51.765085  604010 cri.go:89] found id: ""
	I1213 11:53:51.765111  604010 logs.go:282] 0 containers: []
	W1213 11:53:51.765120  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:51.765130  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:51.765143  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:51.820951  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:51.820986  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:51.837298  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:51.837448  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:51.903778  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:51.894875    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:51.895698    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:51.897404    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:51.897825    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:51.899293    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:51.903855  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:51.903876  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:51.931477  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:51.931561  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:54.461061  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:54.471768  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:54.471839  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:54.497629  604010 cri.go:89] found id: ""
	I1213 11:53:54.497651  604010 logs.go:282] 0 containers: []
	W1213 11:53:54.497660  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:54.497666  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:54.497725  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:54.523805  604010 cri.go:89] found id: ""
	I1213 11:53:54.523830  604010 logs.go:282] 0 containers: []
	W1213 11:53:54.523839  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:54.523846  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:54.523905  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:54.548988  604010 cri.go:89] found id: ""
	I1213 11:53:54.549012  604010 logs.go:282] 0 containers: []
	W1213 11:53:54.549021  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:54.549027  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:54.549089  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:54.584912  604010 cri.go:89] found id: ""
	I1213 11:53:54.584996  604010 logs.go:282] 0 containers: []
	W1213 11:53:54.585012  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:54.585020  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:54.585094  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:54.613768  604010 cri.go:89] found id: ""
	I1213 11:53:54.613810  604010 logs.go:282] 0 containers: []
	W1213 11:53:54.613822  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:54.613832  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:54.613917  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:54.638498  604010 cri.go:89] found id: ""
	I1213 11:53:54.638523  604010 logs.go:282] 0 containers: []
	W1213 11:53:54.638531  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:54.638539  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:54.638597  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:54.663796  604010 cri.go:89] found id: ""
	I1213 11:53:54.663863  604010 logs.go:282] 0 containers: []
	W1213 11:53:54.663886  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:54.663904  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:54.663994  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:54.688512  604010 cri.go:89] found id: ""
	I1213 11:53:54.688595  604010 logs.go:282] 0 containers: []
	W1213 11:53:54.688612  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:54.688623  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:54.688635  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:54.745122  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:54.745158  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:54.761471  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:54.761502  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:54.827485  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:54.818964    4132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:54.819562    4132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:54.821065    4132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:54.821615    4132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:54.823257    4132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:54.827506  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:54.827519  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:54.853348  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:54.853383  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:57.386439  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:57.396996  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:57.397067  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:57.432425  604010 cri.go:89] found id: ""
	I1213 11:53:57.432451  604010 logs.go:282] 0 containers: []
	W1213 11:53:57.432461  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:57.432468  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:57.432531  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:57.468740  604010 cri.go:89] found id: ""
	I1213 11:53:57.468767  604010 logs.go:282] 0 containers: []
	W1213 11:53:57.468777  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:57.468783  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:57.468848  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:57.496008  604010 cri.go:89] found id: ""
	I1213 11:53:57.496032  604010 logs.go:282] 0 containers: []
	W1213 11:53:57.496041  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:57.496053  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:57.496113  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:57.522430  604010 cri.go:89] found id: ""
	I1213 11:53:57.522454  604010 logs.go:282] 0 containers: []
	W1213 11:53:57.522463  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:57.522469  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:57.522528  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:57.547956  604010 cri.go:89] found id: ""
	I1213 11:53:57.547980  604010 logs.go:282] 0 containers: []
	W1213 11:53:57.547988  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:57.547994  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:57.548054  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:57.573554  604010 cri.go:89] found id: ""
	I1213 11:53:57.573579  604010 logs.go:282] 0 containers: []
	W1213 11:53:57.573589  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:57.573596  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:57.573658  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:57.597400  604010 cri.go:89] found id: ""
	I1213 11:53:57.597428  604010 logs.go:282] 0 containers: []
	W1213 11:53:57.597437  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:57.597443  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:57.597501  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:57.621599  604010 cri.go:89] found id: ""
	I1213 11:53:57.621623  604010 logs.go:282] 0 containers: []
	W1213 11:53:57.621632  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:57.621642  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:57.621653  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:57.677116  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:57.677153  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:57.692856  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:57.692929  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:57.758229  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:57.748721    4239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:57.749368    4239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:57.751042    4239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:57.751857    4239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:57.753632    4239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:57.758252  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:57.758266  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:57.784520  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:57.784560  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:00.317292  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:00.352525  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:00.352620  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:00.392603  604010 cri.go:89] found id: ""
	I1213 11:54:00.392636  604010 logs.go:282] 0 containers: []
	W1213 11:54:00.392646  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:00.392654  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:00.392736  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:00.447117  604010 cri.go:89] found id: ""
	I1213 11:54:00.447149  604010 logs.go:282] 0 containers: []
	W1213 11:54:00.447158  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:00.447178  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:00.447281  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:00.479294  604010 cri.go:89] found id: ""
	I1213 11:54:00.479324  604010 logs.go:282] 0 containers: []
	W1213 11:54:00.479333  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:00.479339  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:00.479406  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:00.510064  604010 cri.go:89] found id: ""
	I1213 11:54:00.510092  604010 logs.go:282] 0 containers: []
	W1213 11:54:00.510101  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:00.510108  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:00.510184  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:00.537774  604010 cri.go:89] found id: ""
	I1213 11:54:00.537801  604010 logs.go:282] 0 containers: []
	W1213 11:54:00.537810  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:00.537816  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:00.537877  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:00.563430  604010 cri.go:89] found id: ""
	I1213 11:54:00.563460  604010 logs.go:282] 0 containers: []
	W1213 11:54:00.563469  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:00.563475  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:00.563534  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:00.588470  604010 cri.go:89] found id: ""
	I1213 11:54:00.588495  604010 logs.go:282] 0 containers: []
	W1213 11:54:00.588503  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:00.588510  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:00.588573  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:00.616819  604010 cri.go:89] found id: ""
	I1213 11:54:00.616853  604010 logs.go:282] 0 containers: []
	W1213 11:54:00.616865  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:00.616874  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:00.616887  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:00.632810  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:00.632837  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:00.697200  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:00.688095    4352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:00.688902    4352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:00.690382    4352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:00.690873    4352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:00.692718    4352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:00.697225  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:00.697239  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:00.722351  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:00.722391  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:00.753453  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:00.753489  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
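	With every control-plane container query coming back empty, the kubelet journal gathered above is the natural place to look for why the static pods never started. A hedged one-liner for filtering it, assuming the systemd-managed kubelet implied by the journalctl calls in this log (the grep pattern is illustrative):
	
	  minikube ssh -- "sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail|refused'"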
	I1213 11:54:03.309839  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:03.321093  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:03.321163  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:03.349567  604010 cri.go:89] found id: ""
	I1213 11:54:03.349591  604010 logs.go:282] 0 containers: []
	W1213 11:54:03.349600  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:03.349607  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:03.349667  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:03.374734  604010 cri.go:89] found id: ""
	I1213 11:54:03.374758  604010 logs.go:282] 0 containers: []
	W1213 11:54:03.374767  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:03.374774  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:03.374842  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:03.400074  604010 cri.go:89] found id: ""
	I1213 11:54:03.400099  604010 logs.go:282] 0 containers: []
	W1213 11:54:03.400108  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:03.400114  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:03.400172  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:03.461432  604010 cri.go:89] found id: ""
	I1213 11:54:03.461533  604010 logs.go:282] 0 containers: []
	W1213 11:54:03.461561  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:03.461583  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:03.461673  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:03.504466  604010 cri.go:89] found id: ""
	I1213 11:54:03.504544  604010 logs.go:282] 0 containers: []
	W1213 11:54:03.504566  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:03.504585  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:03.504671  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:03.545459  604010 cri.go:89] found id: ""
	I1213 11:54:03.545482  604010 logs.go:282] 0 containers: []
	W1213 11:54:03.545491  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:03.545497  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:03.545575  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:03.570446  604010 cri.go:89] found id: ""
	I1213 11:54:03.570468  604010 logs.go:282] 0 containers: []
	W1213 11:54:03.570476  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:03.570482  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:03.570539  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:03.595001  604010 cri.go:89] found id: ""
	I1213 11:54:03.595023  604010 logs.go:282] 0 containers: []
	W1213 11:54:03.595031  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:03.595041  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:03.595057  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:03.610922  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:03.610955  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:03.679130  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:03.671134    4462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:03.671746    4462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:03.673204    4462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:03.673644    4462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:03.675078    4462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:03.679152  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:03.679167  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:03.705484  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:03.705522  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:03.732753  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:03.732778  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:06.289051  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:06.299935  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:06.300031  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:06.325745  604010 cri.go:89] found id: ""
	I1213 11:54:06.325777  604010 logs.go:282] 0 containers: []
	W1213 11:54:06.325787  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:06.325794  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:06.325898  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:06.352273  604010 cri.go:89] found id: ""
	I1213 11:54:06.352342  604010 logs.go:282] 0 containers: []
	W1213 11:54:06.352357  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:06.352365  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:06.352437  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:06.376413  604010 cri.go:89] found id: ""
	I1213 11:54:06.376482  604010 logs.go:282] 0 containers: []
	W1213 11:54:06.376507  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:06.376520  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:06.376596  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:06.406144  604010 cri.go:89] found id: ""
	I1213 11:54:06.406188  604010 logs.go:282] 0 containers: []
	W1213 11:54:06.406198  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:06.406206  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:06.406285  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:06.456311  604010 cri.go:89] found id: ""
	I1213 11:54:06.456388  604010 logs.go:282] 0 containers: []
	W1213 11:54:06.456411  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:06.456430  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:06.456526  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:06.510060  604010 cri.go:89] found id: ""
	I1213 11:54:06.510150  604010 logs.go:282] 0 containers: []
	W1213 11:54:06.510174  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:06.510194  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:06.510310  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:06.542373  604010 cri.go:89] found id: ""
	I1213 11:54:06.542450  604010 logs.go:282] 0 containers: []
	W1213 11:54:06.542472  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:06.542494  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:06.542601  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:06.567983  604010 cri.go:89] found id: ""
	I1213 11:54:06.568063  604010 logs.go:282] 0 containers: []
	W1213 11:54:06.568087  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:06.568104  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:06.568129  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:06.624463  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:06.624498  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:06.640970  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:06.641003  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:06.714019  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:06.704918    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:06.705767    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:06.706758    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:06.708430    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:06.708734    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:06.714096  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:06.714117  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:06.739708  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:06.739748  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:09.268501  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:09.279334  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:09.279413  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:09.308998  604010 cri.go:89] found id: ""
	I1213 11:54:09.309034  604010 logs.go:282] 0 containers: []
	W1213 11:54:09.309043  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:09.309050  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:09.309110  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:09.336921  604010 cri.go:89] found id: ""
	I1213 11:54:09.336947  604010 logs.go:282] 0 containers: []
	W1213 11:54:09.336956  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:09.336963  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:09.337025  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:09.367100  604010 cri.go:89] found id: ""
	I1213 11:54:09.367123  604010 logs.go:282] 0 containers: []
	W1213 11:54:09.367131  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:09.367138  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:09.367196  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:09.392881  604010 cri.go:89] found id: ""
	I1213 11:54:09.392913  604010 logs.go:282] 0 containers: []
	W1213 11:54:09.392922  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:09.392930  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:09.392991  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:09.433300  604010 cri.go:89] found id: ""
	I1213 11:54:09.433330  604010 logs.go:282] 0 containers: []
	W1213 11:54:09.433339  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:09.433345  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:09.433408  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:09.499329  604010 cri.go:89] found id: ""
	I1213 11:54:09.499357  604010 logs.go:282] 0 containers: []
	W1213 11:54:09.499365  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:09.499372  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:09.499434  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:09.526348  604010 cri.go:89] found id: ""
	I1213 11:54:09.526383  604010 logs.go:282] 0 containers: []
	W1213 11:54:09.526392  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:09.526399  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:09.526467  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:09.551552  604010 cri.go:89] found id: ""
	I1213 11:54:09.551585  604010 logs.go:282] 0 containers: []
	W1213 11:54:09.551595  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:09.551605  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:09.551617  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:09.607976  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:09.608011  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:09.624198  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:09.624228  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:09.692042  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:09.683184    4687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:09.683833    4687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:09.685650    4687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:09.686276    4687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:09.688111    4687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:09.692065  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:09.692077  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:09.717762  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:09.717799  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:12.251306  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:12.261889  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:12.261958  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:12.286128  604010 cri.go:89] found id: ""
	I1213 11:54:12.286151  604010 logs.go:282] 0 containers: []
	W1213 11:54:12.286160  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:12.286166  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:12.286231  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:12.320955  604010 cri.go:89] found id: ""
	I1213 11:54:12.320982  604010 logs.go:282] 0 containers: []
	W1213 11:54:12.320992  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:12.320999  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:12.321064  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:12.347366  604010 cri.go:89] found id: ""
	I1213 11:54:12.347394  604010 logs.go:282] 0 containers: []
	W1213 11:54:12.347404  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:12.347411  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:12.347475  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:12.372047  604010 cri.go:89] found id: ""
	I1213 11:54:12.372075  604010 logs.go:282] 0 containers: []
	W1213 11:54:12.372084  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:12.372091  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:12.372211  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:12.397441  604010 cri.go:89] found id: ""
	I1213 11:54:12.397466  604010 logs.go:282] 0 containers: []
	W1213 11:54:12.397475  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:12.397482  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:12.397610  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:12.458383  604010 cri.go:89] found id: ""
	I1213 11:54:12.458464  604010 logs.go:282] 0 containers: []
	W1213 11:54:12.458487  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:12.458505  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:12.458610  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:12.499011  604010 cri.go:89] found id: ""
	I1213 11:54:12.499087  604010 logs.go:282] 0 containers: []
	W1213 11:54:12.499110  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:12.499128  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:12.499223  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:12.526019  604010 cri.go:89] found id: ""
	I1213 11:54:12.526048  604010 logs.go:282] 0 containers: []
	W1213 11:54:12.526058  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:12.526068  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:12.526079  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:12.582388  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:12.582425  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:12.598760  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:12.598788  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:12.668226  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:12.659694    4803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:12.660116    4803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:12.661902    4803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:12.662352    4803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:12.663961    4803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:12.668250  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:12.668263  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:12.698476  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:12.698514  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:15.226309  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:15.237066  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:15.237138  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:15.261808  604010 cri.go:89] found id: ""
	I1213 11:54:15.261836  604010 logs.go:282] 0 containers: []
	W1213 11:54:15.261845  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:15.261851  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:15.261912  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:15.286942  604010 cri.go:89] found id: ""
	I1213 11:54:15.286966  604010 logs.go:282] 0 containers: []
	W1213 11:54:15.286975  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:15.286981  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:15.287066  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:15.311813  604010 cri.go:89] found id: ""
	I1213 11:54:15.311842  604010 logs.go:282] 0 containers: []
	W1213 11:54:15.311852  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:15.311859  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:15.311920  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:15.341088  604010 cri.go:89] found id: ""
	I1213 11:54:15.341116  604010 logs.go:282] 0 containers: []
	W1213 11:54:15.341124  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:15.341131  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:15.341188  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:15.365220  604010 cri.go:89] found id: ""
	I1213 11:54:15.365247  604010 logs.go:282] 0 containers: []
	W1213 11:54:15.365256  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:15.365263  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:15.365319  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:15.389056  604010 cri.go:89] found id: ""
	I1213 11:54:15.389084  604010 logs.go:282] 0 containers: []
	W1213 11:54:15.389093  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:15.389099  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:15.389159  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:15.424168  604010 cri.go:89] found id: ""
	I1213 11:54:15.424197  604010 logs.go:282] 0 containers: []
	W1213 11:54:15.424206  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:15.424215  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:15.424275  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:15.458977  604010 cri.go:89] found id: ""
	I1213 11:54:15.459014  604010 logs.go:282] 0 containers: []
	W1213 11:54:15.459023  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:15.459033  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:15.459045  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:15.488624  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:15.488665  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:15.534272  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:15.534300  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:15.593055  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:15.593092  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:15.609340  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:15.609370  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:15.673503  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:15.664722    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:15.665497    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:15.667260    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:15.667958    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:15.669529    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
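	The timestamps show this whole diagnostic pass being retried about every three seconds until a kube-apiserver process appears. The wait boils down to a loop along these lines (a bash sketch, not the harness's actual Go implementation; the pgrep pattern is copied from the log and the 3-second sleep is inferred from the timestamps):
	
	  # Retry until an apiserver process matching the log's pgrep invocation shows up
	  until minikube ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	    sleep 3
	  done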
	I1213 11:54:18.175202  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:18.185611  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:18.185684  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:18.216571  604010 cri.go:89] found id: ""
	I1213 11:54:18.216598  604010 logs.go:282] 0 containers: []
	W1213 11:54:18.216609  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:18.216616  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:18.216676  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:18.244020  604010 cri.go:89] found id: ""
	I1213 11:54:18.244044  604010 logs.go:282] 0 containers: []
	W1213 11:54:18.244053  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:18.244060  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:18.244125  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:18.269644  604010 cri.go:89] found id: ""
	I1213 11:54:18.269677  604010 logs.go:282] 0 containers: []
	W1213 11:54:18.269686  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:18.269699  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:18.269759  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:18.295049  604010 cri.go:89] found id: ""
	I1213 11:54:18.295074  604010 logs.go:282] 0 containers: []
	W1213 11:54:18.295084  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:18.295092  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:18.295151  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:18.319970  604010 cri.go:89] found id: ""
	I1213 11:54:18.319994  604010 logs.go:282] 0 containers: []
	W1213 11:54:18.320003  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:18.320009  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:18.320068  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:18.348557  604010 cri.go:89] found id: ""
	I1213 11:54:18.348583  604010 logs.go:282] 0 containers: []
	W1213 11:54:18.348591  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:18.348598  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:18.348661  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:18.372733  604010 cri.go:89] found id: ""
	I1213 11:54:18.372759  604010 logs.go:282] 0 containers: []
	W1213 11:54:18.372769  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:18.372775  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:18.372833  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:18.397904  604010 cri.go:89] found id: ""
	I1213 11:54:18.397927  604010 logs.go:282] 0 containers: []
	W1213 11:54:18.397936  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:18.397945  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:18.397958  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:18.475145  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:18.475177  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:18.509115  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:18.509140  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:18.578046  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:18.568558    5027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:18.569407    5027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:18.571224    5027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:18.571849    5027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:18.573663    5027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:18.578069  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:18.578080  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:18.604022  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:18.604057  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:21.135717  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:21.151653  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:21.151722  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:21.181267  604010 cri.go:89] found id: ""
	I1213 11:54:21.181292  604010 logs.go:282] 0 containers: []
	W1213 11:54:21.181300  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:21.181306  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:21.181363  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:21.211036  604010 cri.go:89] found id: ""
	I1213 11:54:21.211064  604010 logs.go:282] 0 containers: []
	W1213 11:54:21.211073  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:21.211079  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:21.211136  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:21.235057  604010 cri.go:89] found id: ""
	I1213 11:54:21.235082  604010 logs.go:282] 0 containers: []
	W1213 11:54:21.235091  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:21.235097  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:21.235158  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:21.259604  604010 cri.go:89] found id: ""
	I1213 11:54:21.259629  604010 logs.go:282] 0 containers: []
	W1213 11:54:21.259637  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:21.259644  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:21.259710  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:21.284921  604010 cri.go:89] found id: ""
	I1213 11:54:21.284948  604010 logs.go:282] 0 containers: []
	W1213 11:54:21.284957  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:21.284963  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:21.285022  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:21.311134  604010 cri.go:89] found id: ""
	I1213 11:54:21.311162  604010 logs.go:282] 0 containers: []
	W1213 11:54:21.311171  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:21.311178  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:21.311238  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:21.337100  604010 cri.go:89] found id: ""
	I1213 11:54:21.337124  604010 logs.go:282] 0 containers: []
	W1213 11:54:21.337133  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:21.337140  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:21.337201  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:21.361945  604010 cri.go:89] found id: ""
	I1213 11:54:21.361969  604010 logs.go:282] 0 containers: []
	W1213 11:54:21.361977  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:21.361987  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:21.362001  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:21.424925  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:21.424964  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:21.442370  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:21.442449  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:21.544421  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:21.527951    5142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:21.529143    5142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:21.530038    5142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:21.535082    5142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:21.535420    5142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:21.544487  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:21.544508  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:21.569861  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:21.569899  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:24.098574  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:24.109255  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:24.109328  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:24.135881  604010 cri.go:89] found id: ""
	I1213 11:54:24.135904  604010 logs.go:282] 0 containers: []
	W1213 11:54:24.135913  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:24.135919  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:24.135976  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:24.160249  604010 cri.go:89] found id: ""
	I1213 11:54:24.160272  604010 logs.go:282] 0 containers: []
	W1213 11:54:24.160281  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:24.160294  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:24.160356  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:24.185097  604010 cri.go:89] found id: ""
	I1213 11:54:24.185120  604010 logs.go:282] 0 containers: []
	W1213 11:54:24.185129  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:24.185136  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:24.185197  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:24.210052  604010 cri.go:89] found id: ""
	I1213 11:54:24.210133  604010 logs.go:282] 0 containers: []
	W1213 11:54:24.210156  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:24.210174  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:24.210263  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:24.234868  604010 cri.go:89] found id: ""
	I1213 11:54:24.234895  604010 logs.go:282] 0 containers: []
	W1213 11:54:24.234905  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:24.234912  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:24.234968  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:24.258998  604010 cri.go:89] found id: ""
	I1213 11:54:24.259023  604010 logs.go:282] 0 containers: []
	W1213 11:54:24.259032  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:24.259039  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:24.259099  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:24.282644  604010 cri.go:89] found id: ""
	I1213 11:54:24.282672  604010 logs.go:282] 0 containers: []
	W1213 11:54:24.282713  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:24.282721  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:24.282780  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:24.312793  604010 cri.go:89] found id: ""
	I1213 11:54:24.312822  604010 logs.go:282] 0 containers: []
	W1213 11:54:24.312831  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:24.312841  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:24.312853  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:24.328614  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:24.328643  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:24.398953  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:24.390748    5250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:24.391466    5250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:24.392548    5250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:24.393304    5250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:24.394893    5250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:24.398978  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:24.398992  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:24.447276  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:24.447353  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:24.512358  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:24.512384  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:27.079756  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:27.090085  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:27.090157  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:27.114934  604010 cri.go:89] found id: ""
	I1213 11:54:27.114957  604010 logs.go:282] 0 containers: []
	W1213 11:54:27.114966  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:27.114972  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:27.115032  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:27.139399  604010 cri.go:89] found id: ""
	I1213 11:54:27.139424  604010 logs.go:282] 0 containers: []
	W1213 11:54:27.139433  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:27.139439  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:27.139496  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:27.164348  604010 cri.go:89] found id: ""
	I1213 11:54:27.164371  604010 logs.go:282] 0 containers: []
	W1213 11:54:27.164379  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:27.164385  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:27.164443  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:27.189263  604010 cri.go:89] found id: ""
	I1213 11:54:27.189286  604010 logs.go:282] 0 containers: []
	W1213 11:54:27.189294  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:27.189302  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:27.189362  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:27.214003  604010 cri.go:89] found id: ""
	I1213 11:54:27.214076  604010 logs.go:282] 0 containers: []
	W1213 11:54:27.214101  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:27.214121  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:27.214204  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:27.238568  604010 cri.go:89] found id: ""
	I1213 11:54:27.238632  604010 logs.go:282] 0 containers: []
	W1213 11:54:27.238657  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:27.238675  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:27.238861  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:27.263827  604010 cri.go:89] found id: ""
	I1213 11:54:27.263850  604010 logs.go:282] 0 containers: []
	W1213 11:54:27.263858  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:27.263864  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:27.263941  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:27.293643  604010 cri.go:89] found id: ""
	I1213 11:54:27.293672  604010 logs.go:282] 0 containers: []
	W1213 11:54:27.293680  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:27.293691  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:27.293706  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:27.353462  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:27.353498  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:27.369639  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:27.369723  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:27.462957  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:27.448639    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:27.449130    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:27.455578    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:27.456379    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:27.459064    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:27.462984  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:27.463007  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:27.502080  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:27.502115  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:30.033979  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:30.048817  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:30.048921  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:30.086312  604010 cri.go:89] found id: ""
	I1213 11:54:30.086343  604010 logs.go:282] 0 containers: []
	W1213 11:54:30.086353  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:30.086361  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:30.086431  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:30.118027  604010 cri.go:89] found id: ""
	I1213 11:54:30.118056  604010 logs.go:282] 0 containers: []
	W1213 11:54:30.118066  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:30.118073  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:30.118139  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:30.150398  604010 cri.go:89] found id: ""
	I1213 11:54:30.150422  604010 logs.go:282] 0 containers: []
	W1213 11:54:30.150431  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:30.150437  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:30.150501  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:30.176994  604010 cri.go:89] found id: ""
	I1213 11:54:30.177024  604010 logs.go:282] 0 containers: []
	W1213 11:54:30.177033  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:30.177040  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:30.177102  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:30.204667  604010 cri.go:89] found id: ""
	I1213 11:54:30.204692  604010 logs.go:282] 0 containers: []
	W1213 11:54:30.204702  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:30.204709  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:30.204768  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:30.233311  604010 cri.go:89] found id: ""
	I1213 11:54:30.233340  604010 logs.go:282] 0 containers: []
	W1213 11:54:30.233350  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:30.233357  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:30.233443  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:30.258722  604010 cri.go:89] found id: ""
	I1213 11:54:30.258749  604010 logs.go:282] 0 containers: []
	W1213 11:54:30.258759  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:30.258766  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:30.258828  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:30.284738  604010 cri.go:89] found id: ""
	I1213 11:54:30.284766  604010 logs.go:282] 0 containers: []
	W1213 11:54:30.284775  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:30.284785  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:30.284797  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:30.352842  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:30.344108    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:30.344689    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:30.346232    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:30.346735    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:30.348264    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:30.352861  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:30.352873  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:30.377958  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:30.377993  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:30.409746  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:30.409777  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:30.497989  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:30.498042  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:33.019623  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:33.030945  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:33.031018  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:33.060411  604010 cri.go:89] found id: ""
	I1213 11:54:33.060436  604010 logs.go:282] 0 containers: []
	W1213 11:54:33.060445  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:33.060452  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:33.060514  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:33.085659  604010 cri.go:89] found id: ""
	I1213 11:54:33.085684  604010 logs.go:282] 0 containers: []
	W1213 11:54:33.085693  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:33.085700  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:33.085762  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:33.110577  604010 cri.go:89] found id: ""
	I1213 11:54:33.110603  604010 logs.go:282] 0 containers: []
	W1213 11:54:33.110612  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:33.110618  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:33.110676  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:33.140224  604010 cri.go:89] found id: ""
	I1213 11:54:33.140252  604010 logs.go:282] 0 containers: []
	W1213 11:54:33.140261  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:33.140267  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:33.140328  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:33.165441  604010 cri.go:89] found id: ""
	I1213 11:54:33.165467  604010 logs.go:282] 0 containers: []
	W1213 11:54:33.165477  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:33.165483  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:33.165574  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:33.191299  604010 cri.go:89] found id: ""
	I1213 11:54:33.191324  604010 logs.go:282] 0 containers: []
	W1213 11:54:33.191332  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:33.191339  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:33.191400  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:33.216285  604010 cri.go:89] found id: ""
	I1213 11:54:33.216311  604010 logs.go:282] 0 containers: []
	W1213 11:54:33.216320  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:33.216327  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:33.216386  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:33.241156  604010 cri.go:89] found id: ""
	I1213 11:54:33.241180  604010 logs.go:282] 0 containers: []
	W1213 11:54:33.241189  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:33.241199  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:33.241210  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:33.269984  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:33.270014  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:33.326746  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:33.326782  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:33.343845  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:33.343874  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:33.421478  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:33.403624    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:33.404936    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:33.405920    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:33.407713    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:33.408279    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:33.421564  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:33.421594  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:35.956688  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:35.967776  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:35.967847  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:35.992715  604010 cri.go:89] found id: ""
	I1213 11:54:35.992745  604010 logs.go:282] 0 containers: []
	W1213 11:54:35.992753  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:35.992760  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:35.992821  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:36.030819  604010 cri.go:89] found id: ""
	I1213 11:54:36.030854  604010 logs.go:282] 0 containers: []
	W1213 11:54:36.030864  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:36.030870  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:36.030940  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:36.056512  604010 cri.go:89] found id: ""
	I1213 11:54:36.056537  604010 logs.go:282] 0 containers: []
	W1213 11:54:36.056547  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:36.056553  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:36.056613  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:36.083355  604010 cri.go:89] found id: ""
	I1213 11:54:36.083381  604010 logs.go:282] 0 containers: []
	W1213 11:54:36.083390  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:36.083397  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:36.083458  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:36.109765  604010 cri.go:89] found id: ""
	I1213 11:54:36.109791  604010 logs.go:282] 0 containers: []
	W1213 11:54:36.109799  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:36.109806  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:36.109866  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:36.139001  604010 cri.go:89] found id: ""
	I1213 11:54:36.139030  604010 logs.go:282] 0 containers: []
	W1213 11:54:36.139040  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:36.139048  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:36.139109  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:36.164252  604010 cri.go:89] found id: ""
	I1213 11:54:36.164280  604010 logs.go:282] 0 containers: []
	W1213 11:54:36.164290  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:36.164297  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:36.164419  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:36.193554  604010 cri.go:89] found id: ""
	I1213 11:54:36.193579  604010 logs.go:282] 0 containers: []
	W1213 11:54:36.193588  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:36.193597  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:36.193609  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:36.225514  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:36.225555  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:36.284505  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:36.284551  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:36.300602  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:36.300632  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:36.368620  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:36.358956    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:36.360036    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:36.361784    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:36.362389    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:36.364078    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:36.368642  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:36.368654  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:38.894313  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:38.906401  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:38.906478  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:38.931173  604010 cri.go:89] found id: ""
	I1213 11:54:38.931200  604010 logs.go:282] 0 containers: []
	W1213 11:54:38.931210  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:38.931217  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:38.931280  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:38.957289  604010 cri.go:89] found id: ""
	I1213 11:54:38.957315  604010 logs.go:282] 0 containers: []
	W1213 11:54:38.957324  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:38.957330  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:38.957391  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:38.984282  604010 cri.go:89] found id: ""
	I1213 11:54:38.984307  604010 logs.go:282] 0 containers: []
	W1213 11:54:38.984317  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:38.984323  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:38.984402  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:39.012924  604010 cri.go:89] found id: ""
	I1213 11:54:39.012994  604010 logs.go:282] 0 containers: []
	W1213 11:54:39.013012  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:39.013021  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:39.013085  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:39.039025  604010 cri.go:89] found id: ""
	I1213 11:54:39.039062  604010 logs.go:282] 0 containers: []
	W1213 11:54:39.039071  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:39.039077  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:39.039145  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:39.066984  604010 cri.go:89] found id: ""
	I1213 11:54:39.067009  604010 logs.go:282] 0 containers: []
	W1213 11:54:39.067018  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:39.067024  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:39.067088  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:39.093147  604010 cri.go:89] found id: ""
	I1213 11:54:39.093172  604010 logs.go:282] 0 containers: []
	W1213 11:54:39.093181  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:39.093188  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:39.093247  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:39.120841  604010 cri.go:89] found id: ""
	I1213 11:54:39.120866  604010 logs.go:282] 0 containers: []
	W1213 11:54:39.120875  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:39.120884  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:39.120896  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:39.177077  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:39.177113  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:39.193258  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:39.193284  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:39.255506  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:39.246949    5824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:39.247600    5824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:39.249297    5824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:39.249837    5824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:39.251408    5824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:39.255531  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:39.255546  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:39.280959  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:39.280995  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:41.808371  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:41.820751  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:41.820829  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:41.847226  604010 cri.go:89] found id: ""
	I1213 11:54:41.847249  604010 logs.go:282] 0 containers: []
	W1213 11:54:41.847258  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:41.847264  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:41.847322  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:41.873405  604010 cri.go:89] found id: ""
	I1213 11:54:41.873436  604010 logs.go:282] 0 containers: []
	W1213 11:54:41.873448  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:41.873455  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:41.873519  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:41.899479  604010 cri.go:89] found id: ""
	I1213 11:54:41.899509  604010 logs.go:282] 0 containers: []
	W1213 11:54:41.899518  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:41.899524  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:41.899582  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:41.923515  604010 cri.go:89] found id: ""
	I1213 11:54:41.923545  604010 logs.go:282] 0 containers: []
	W1213 11:54:41.923554  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:41.923561  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:41.923621  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:41.952086  604010 cri.go:89] found id: ""
	I1213 11:54:41.952110  604010 logs.go:282] 0 containers: []
	W1213 11:54:41.952119  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:41.952125  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:41.952182  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:41.976613  604010 cri.go:89] found id: ""
	I1213 11:54:41.976637  604010 logs.go:282] 0 containers: []
	W1213 11:54:41.976646  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:41.976653  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:41.976714  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:42.010402  604010 cri.go:89] found id: ""
	I1213 11:54:42.010434  604010 logs.go:282] 0 containers: []
	W1213 11:54:42.010443  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:42.010450  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:42.010520  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:42.038928  604010 cri.go:89] found id: ""
	I1213 11:54:42.038955  604010 logs.go:282] 0 containers: []
	W1213 11:54:42.038964  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:42.038974  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:42.038985  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:42.096963  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:42.097004  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:42.115172  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:42.115213  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:42.192959  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:42.182320    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:42.183391    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:42.184373    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:42.186141    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:42.186781    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:42.192981  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:42.192995  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:42.219986  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:42.220023  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
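
Each cycle above walks the same component list through crictl. Below is a hedged sketch of that enumeration; the command and component names are copied from the log, but the loop itself is illustrative rather than cri.go's actual implementation:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same probe the log runs for every control-plane component:
	// sudo crictl ps -a --quiet --name=<component>
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.TrimSpace(string(out))
		if err != nil || ids == "" {
			// Mirrors the log's: No container was found matching "<name>"
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %s\n", name, ids)
	}
}
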
	I1213 11:54:44.750998  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:44.761521  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:44.761601  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:44.785581  604010 cri.go:89] found id: ""
	I1213 11:54:44.785609  604010 logs.go:282] 0 containers: []
	W1213 11:54:44.785618  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:44.785625  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:44.785681  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:44.810312  604010 cri.go:89] found id: ""
	I1213 11:54:44.810340  604010 logs.go:282] 0 containers: []
	W1213 11:54:44.810349  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:44.810356  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:44.810419  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:44.834980  604010 cri.go:89] found id: ""
	I1213 11:54:44.835004  604010 logs.go:282] 0 containers: []
	W1213 11:54:44.835012  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:44.835018  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:44.835082  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:44.868160  604010 cri.go:89] found id: ""
	I1213 11:54:44.868187  604010 logs.go:282] 0 containers: []
	W1213 11:54:44.868196  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:44.868203  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:44.868263  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:44.893689  604010 cri.go:89] found id: ""
	I1213 11:54:44.893715  604010 logs.go:282] 0 containers: []
	W1213 11:54:44.893723  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:44.893730  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:44.893788  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:44.918090  604010 cri.go:89] found id: ""
	I1213 11:54:44.918119  604010 logs.go:282] 0 containers: []
	W1213 11:54:44.918128  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:44.918135  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:44.918196  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:44.944994  604010 cri.go:89] found id: ""
	I1213 11:54:44.945022  604010 logs.go:282] 0 containers: []
	W1213 11:54:44.945032  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:44.945038  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:44.945102  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:44.969862  604010 cri.go:89] found id: ""
	I1213 11:54:44.969891  604010 logs.go:282] 0 containers: []
	W1213 11:54:44.969900  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:44.969910  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:44.969921  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:45.027468  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:45.027521  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:45.054117  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:45.054213  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:45.178092  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:45.159739    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:45.160529    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:45.166319    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:45.166867    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:45.169009    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:45.178126  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:45.178168  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:45.209407  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:45.209462  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
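
The cycles repeat at a steady ~3 s cadence (11:54:41, :44, :47, ...), consistent with a poll-until-deadline loop of roughly this shape. Interval and timeout here are assumptions inferred from the log timing, not confirmed minikube values:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor polls check every interval until it succeeds or timeout elapses.
// (Hypothetical helper, named only to illustrate the retry pattern.)
func waitFor(check func() bool, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if check() {
			return nil
		}
		time.Sleep(interval) // matches the ~3 s gap between cycles above
	}
	return errors.New("timed out waiting for kube-apiserver")
}

func main() {
	// The check never succeeds in this log, so the loop runs to its deadline.
	err := waitFor(func() bool { return false }, 3*time.Second, 15*time.Second)
	fmt.Println(err)
}
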
	I1213 11:54:47.757891  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:47.768440  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:47.768511  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:47.797232  604010 cri.go:89] found id: ""
	I1213 11:54:47.797258  604010 logs.go:282] 0 containers: []
	W1213 11:54:47.797267  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:47.797274  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:47.797331  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:47.822035  604010 cri.go:89] found id: ""
	I1213 11:54:47.822059  604010 logs.go:282] 0 containers: []
	W1213 11:54:47.822068  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:47.822074  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:47.822139  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:47.850594  604010 cri.go:89] found id: ""
	I1213 11:54:47.850619  604010 logs.go:282] 0 containers: []
	W1213 11:54:47.850627  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:47.850634  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:47.850715  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:47.875934  604010 cri.go:89] found id: ""
	I1213 11:54:47.875958  604010 logs.go:282] 0 containers: []
	W1213 11:54:47.875967  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:47.875975  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:47.876036  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:47.904019  604010 cri.go:89] found id: ""
	I1213 11:54:47.904043  604010 logs.go:282] 0 containers: []
	W1213 11:54:47.904051  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:47.904058  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:47.904122  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:47.928717  604010 cri.go:89] found id: ""
	I1213 11:54:47.928743  604010 logs.go:282] 0 containers: []
	W1213 11:54:47.928751  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:47.928758  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:47.928818  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:47.953107  604010 cri.go:89] found id: ""
	I1213 11:54:47.953135  604010 logs.go:282] 0 containers: []
	W1213 11:54:47.953144  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:47.953152  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:47.953228  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:47.977855  604010 cri.go:89] found id: ""
	I1213 11:54:47.977891  604010 logs.go:282] 0 containers: []
	W1213 11:54:47.977900  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:47.977910  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:47.977940  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:48.033045  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:48.033085  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:48.049516  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:48.049571  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:48.119802  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:48.111384    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:48.112145    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:48.113839    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:48.114220    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:48.115737    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:48.119824  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:48.119837  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:48.144575  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:48.144606  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:50.674890  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:50.689012  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:50.689130  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:50.747025  604010 cri.go:89] found id: ""
	I1213 11:54:50.747102  604010 logs.go:282] 0 containers: []
	W1213 11:54:50.747125  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:50.747143  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:50.747232  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:50.775729  604010 cri.go:89] found id: ""
	I1213 11:54:50.775795  604010 logs.go:282] 0 containers: []
	W1213 11:54:50.775812  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:50.775820  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:50.775887  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:50.799251  604010 cri.go:89] found id: ""
	I1213 11:54:50.799277  604010 logs.go:282] 0 containers: []
	W1213 11:54:50.799286  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:50.799292  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:50.799380  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:50.822964  604010 cri.go:89] found id: ""
	I1213 11:54:50.823033  604010 logs.go:282] 0 containers: []
	W1213 11:54:50.823047  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:50.823054  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:50.823125  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:50.851245  604010 cri.go:89] found id: ""
	I1213 11:54:50.851270  604010 logs.go:282] 0 containers: []
	W1213 11:54:50.851279  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:50.851285  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:50.851346  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:50.877382  604010 cri.go:89] found id: ""
	I1213 11:54:50.877405  604010 logs.go:282] 0 containers: []
	W1213 11:54:50.877414  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:50.877420  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:50.877478  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:50.903657  604010 cri.go:89] found id: ""
	I1213 11:54:50.903681  604010 logs.go:282] 0 containers: []
	W1213 11:54:50.903690  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:50.903696  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:50.903754  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:50.931954  604010 cri.go:89] found id: ""
	I1213 11:54:50.931977  604010 logs.go:282] 0 containers: []
	W1213 11:54:50.931992  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:50.932002  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:50.932016  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:50.988153  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:50.988188  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:51.004868  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:51.004912  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:51.078536  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:51.069572    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:51.070163    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:51.071963    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:51.072503    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:51.074005    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:51.078558  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:51.078571  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:51.105933  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:51.105979  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
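
When no containers turn up, the tool falls back to gathering four raw log sources. A rough local sketch of that fan-out follows; the shell pipelines are copied verbatim from the Run: lines above, while the loop (and running them locally instead of over SSH) is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Each entry is one "Gathering logs for ..." step from the log; a slice
	// of structs keeps the same order the tool uses.
	sources := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"containerd", "sudo journalctl -u containerd -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range sources {
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		fmt.Printf("== %s (err=%v) ==\n%s\n", s.name, err, out)
	}
}
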
	I1213 11:54:53.638010  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:53.648726  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:53.648799  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:53.692658  604010 cri.go:89] found id: ""
	I1213 11:54:53.692685  604010 logs.go:282] 0 containers: []
	W1213 11:54:53.692693  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:53.692700  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:53.692760  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:53.728295  604010 cri.go:89] found id: ""
	I1213 11:54:53.728326  604010 logs.go:282] 0 containers: []
	W1213 11:54:53.728335  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:53.728343  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:53.728402  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:53.768548  604010 cri.go:89] found id: ""
	I1213 11:54:53.768576  604010 logs.go:282] 0 containers: []
	W1213 11:54:53.768585  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:53.768591  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:53.768649  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:53.808130  604010 cri.go:89] found id: ""
	I1213 11:54:53.808152  604010 logs.go:282] 0 containers: []
	W1213 11:54:53.808161  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:53.808167  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:53.808231  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:53.832811  604010 cri.go:89] found id: ""
	I1213 11:54:53.832839  604010 logs.go:282] 0 containers: []
	W1213 11:54:53.832849  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:53.832856  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:53.832916  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:53.857746  604010 cri.go:89] found id: ""
	I1213 11:54:53.857770  604010 logs.go:282] 0 containers: []
	W1213 11:54:53.857778  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:53.857785  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:53.857844  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:53.881722  604010 cri.go:89] found id: ""
	I1213 11:54:53.881747  604010 logs.go:282] 0 containers: []
	W1213 11:54:53.881756  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:53.881763  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:53.881830  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:53.907820  604010 cri.go:89] found id: ""
	I1213 11:54:53.907844  604010 logs.go:282] 0 containers: []
	W1213 11:54:53.907854  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:53.907864  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:53.907877  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:53.963717  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:53.963753  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:53.979615  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:53.979645  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:54.065903  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:54.056577    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:54.057248    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:54.058603    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:54.059235    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:54.061166    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:54.065924  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:54.065938  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:54.091653  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:54.091689  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:56.621960  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:56.633738  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:56.633810  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:56.692820  604010 cri.go:89] found id: ""
	I1213 11:54:56.692846  604010 logs.go:282] 0 containers: []
	W1213 11:54:56.692856  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:56.692863  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:56.692924  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:56.758799  604010 cri.go:89] found id: ""
	I1213 11:54:56.758842  604010 logs.go:282] 0 containers: []
	W1213 11:54:56.758870  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:56.758884  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:56.758978  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:56.784490  604010 cri.go:89] found id: ""
	I1213 11:54:56.784516  604010 logs.go:282] 0 containers: []
	W1213 11:54:56.784525  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:56.784532  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:56.784593  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:56.808898  604010 cri.go:89] found id: ""
	I1213 11:54:56.808919  604010 logs.go:282] 0 containers: []
	W1213 11:54:56.808928  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:56.808940  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:56.808998  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:56.833308  604010 cri.go:89] found id: ""
	I1213 11:54:56.833373  604010 logs.go:282] 0 containers: []
	W1213 11:54:56.833398  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:56.833416  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:56.833489  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:56.862468  604010 cri.go:89] found id: ""
	I1213 11:54:56.862543  604010 logs.go:282] 0 containers: []
	W1213 11:54:56.862568  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:56.862588  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:56.862678  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:56.891924  604010 cri.go:89] found id: ""
	I1213 11:54:56.891952  604010 logs.go:282] 0 containers: []
	W1213 11:54:56.891962  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:56.891969  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:56.892033  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:56.916269  604010 cri.go:89] found id: ""
	I1213 11:54:56.916296  604010 logs.go:282] 0 containers: []
	W1213 11:54:56.916306  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:56.916315  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:56.916327  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:56.980544  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:56.971761    6500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:56.972786    6500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:56.974371    6500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:56.974958    6500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:56.976490    6500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:56.980565  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:56.980579  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:57.005423  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:57.005460  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:57.032993  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:57.033071  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:57.088966  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:57.089003  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
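
Each retry opens with a process-level check before any CRI queries. The pgrep invocation below is copied from the log; in the real run it goes through ssh_runner inside the node, here it is shown as a plain local call for illustration:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// sudo pgrep -xnf kube-apiserver.*minikube.* : newest (-n) process whose
	// full command line (-f) exactly matches (-x) the pattern. pgrep exits
	// non-zero when nothing matches, which is what every failing cycle hits.
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("no kube-apiserver process found:", err)
		return
	}
	fmt.Printf("kube-apiserver pid: %s", out)
}
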
	I1213 11:54:59.606260  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:59.617007  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:59.617079  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:59.644389  604010 cri.go:89] found id: ""
	I1213 11:54:59.644411  604010 logs.go:282] 0 containers: []
	W1213 11:54:59.644420  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:59.644427  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:59.644484  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:59.689247  604010 cri.go:89] found id: ""
	I1213 11:54:59.689273  604010 logs.go:282] 0 containers: []
	W1213 11:54:59.689282  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:59.689289  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:59.689348  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:59.729540  604010 cri.go:89] found id: ""
	I1213 11:54:59.729582  604010 logs.go:282] 0 containers: []
	W1213 11:54:59.729591  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:59.729597  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:59.729658  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:59.759256  604010 cri.go:89] found id: ""
	I1213 11:54:59.759286  604010 logs.go:282] 0 containers: []
	W1213 11:54:59.759295  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:59.759301  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:59.759362  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:59.788748  604010 cri.go:89] found id: ""
	I1213 11:54:59.788772  604010 logs.go:282] 0 containers: []
	W1213 11:54:59.788780  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:59.788787  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:59.788846  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:59.817278  604010 cri.go:89] found id: ""
	I1213 11:54:59.817313  604010 logs.go:282] 0 containers: []
	W1213 11:54:59.817322  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:59.817328  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:59.817389  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:59.842756  604010 cri.go:89] found id: ""
	I1213 11:54:59.842780  604010 logs.go:282] 0 containers: []
	W1213 11:54:59.842788  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:59.842794  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:59.842862  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:59.868412  604010 cri.go:89] found id: ""
	I1213 11:54:59.868435  604010 logs.go:282] 0 containers: []
	W1213 11:54:59.868443  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:59.868453  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:59.868464  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:59.924773  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:59.924808  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:59.940672  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:59.940704  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:00.041026  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:00.001683    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:00.002326    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:00.007036    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:00.009108    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:00.010359    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:55:00.045695  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:00.045733  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:00.200188  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:00.200291  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:02.798329  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:02.808984  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:02.809067  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:02.836650  604010 cri.go:89] found id: ""
	I1213 11:55:02.836675  604010 logs.go:282] 0 containers: []
	W1213 11:55:02.836684  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:02.836692  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:02.836755  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:02.861812  604010 cri.go:89] found id: ""
	I1213 11:55:02.861837  604010 logs.go:282] 0 containers: []
	W1213 11:55:02.861846  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:02.861853  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:02.861915  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:02.892956  604010 cri.go:89] found id: ""
	I1213 11:55:02.892982  604010 logs.go:282] 0 containers: []
	W1213 11:55:02.892992  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:02.892999  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:02.893061  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:02.921418  604010 cri.go:89] found id: ""
	I1213 11:55:02.921444  604010 logs.go:282] 0 containers: []
	W1213 11:55:02.921454  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:02.921460  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:02.921517  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:02.945971  604010 cri.go:89] found id: ""
	I1213 11:55:02.945998  604010 logs.go:282] 0 containers: []
	W1213 11:55:02.946007  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:02.946013  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:02.946071  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:02.971224  604010 cri.go:89] found id: ""
	I1213 11:55:02.971249  604010 logs.go:282] 0 containers: []
	W1213 11:55:02.971258  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:02.971264  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:02.971322  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:02.996070  604010 cri.go:89] found id: ""
	I1213 11:55:02.996098  604010 logs.go:282] 0 containers: []
	W1213 11:55:02.996107  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:02.996113  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:02.996175  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:03.026595  604010 cri.go:89] found id: ""
	I1213 11:55:03.026628  604010 logs.go:282] 0 containers: []
	W1213 11:55:03.026637  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:03.026647  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:03.026662  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:03.083030  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:03.083068  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:03.099216  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:03.099247  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:03.164245  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:03.155657    6728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:03.156486    6728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:03.158171    6728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:03.158870    6728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:03.160386    6728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
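The repeated "connection refused" errors above are the whole story: kubectl is invoked correctly, but nothing is listening on localhost:8443 because no kube-apiserver container ever started. A minimal Go sketch (not minikube's code; the address and timeout are assumptions for illustration) that reproduces the same symptom by dialing the port directly:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the apiserver's expected port; with no apiserver running this
	// fails exactly like the log: connect: connection refused.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is open")
}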
	I1213 11:55:03.164269  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:03.164287  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:03.190063  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:03.190105  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
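The block above is one full iteration of minikube's readiness loop: list CRI containers for each control-plane component, and when every query returns zero IDs, gather kubelet, dmesg, describe-nodes, containerd, and container-status output before retrying a few seconds later (each new iteration opens with the pgrep probe on the next line). A hedged sketch of the container-polling half, assuming crictl is on the PATH (an illustration, not the actual cri.go/ssh_runner.go implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Component names mirror the queries in the log above.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for attempt := 0; attempt < 3; attempt++ {
		for _, name := range components {
			// Same query the log runs via ssh_runner: container IDs by name.
			out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out))
			fmt.Printf("%s: %d containers\n", name, len(ids))
		}
		time.Sleep(3 * time.Second) // the log shows roughly this retry cadence
	}
}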
	I1213 11:55:05.717488  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:05.729517  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:05.729651  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:05.754839  604010 cri.go:89] found id: ""
	I1213 11:55:05.754862  604010 logs.go:282] 0 containers: []
	W1213 11:55:05.754870  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:05.754877  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:05.754935  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:05.779444  604010 cri.go:89] found id: ""
	I1213 11:55:05.779470  604010 logs.go:282] 0 containers: []
	W1213 11:55:05.779478  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:05.779486  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:05.779546  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:05.804435  604010 cri.go:89] found id: ""
	I1213 11:55:05.804460  604010 logs.go:282] 0 containers: []
	W1213 11:55:05.804468  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:05.804475  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:05.804536  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:05.828365  604010 cri.go:89] found id: ""
	I1213 11:55:05.828431  604010 logs.go:282] 0 containers: []
	W1213 11:55:05.828454  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:05.828473  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:05.828538  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:05.853088  604010 cri.go:89] found id: ""
	I1213 11:55:05.853114  604010 logs.go:282] 0 containers: []
	W1213 11:55:05.853123  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:05.853129  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:05.853187  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:05.881265  604010 cri.go:89] found id: ""
	I1213 11:55:05.881288  604010 logs.go:282] 0 containers: []
	W1213 11:55:05.881297  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:05.881303  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:05.881363  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:05.907771  604010 cri.go:89] found id: ""
	I1213 11:55:05.907795  604010 logs.go:282] 0 containers: []
	W1213 11:55:05.907804  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:05.907811  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:05.907881  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:05.932155  604010 cri.go:89] found id: ""
	I1213 11:55:05.932181  604010 logs.go:282] 0 containers: []
	W1213 11:55:05.932189  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:05.932199  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:05.932211  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:05.960440  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:05.960467  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:06.018319  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:06.018357  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:06.034573  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:06.034602  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:06.099936  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:06.091153    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:06.091939    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:06.093705    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:06.094323    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:06.095974    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:55:06.099962  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:06.099975  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:08.626581  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:08.637490  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:08.637574  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:08.674556  604010 cri.go:89] found id: ""
	I1213 11:55:08.674581  604010 logs.go:282] 0 containers: []
	W1213 11:55:08.674589  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:08.674598  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:08.674659  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:08.719063  604010 cri.go:89] found id: ""
	I1213 11:55:08.719087  604010 logs.go:282] 0 containers: []
	W1213 11:55:08.719095  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:08.719101  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:08.719166  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:08.761839  604010 cri.go:89] found id: ""
	I1213 11:55:08.761863  604010 logs.go:282] 0 containers: []
	W1213 11:55:08.761872  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:08.761878  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:08.761939  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:08.793242  604010 cri.go:89] found id: ""
	I1213 11:55:08.793266  604010 logs.go:282] 0 containers: []
	W1213 11:55:08.793274  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:08.793281  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:08.793338  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:08.823380  604010 cri.go:89] found id: ""
	I1213 11:55:08.823406  604010 logs.go:282] 0 containers: []
	W1213 11:55:08.823416  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:08.823424  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:08.823488  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:08.849669  604010 cri.go:89] found id: ""
	I1213 11:55:08.849696  604010 logs.go:282] 0 containers: []
	W1213 11:55:08.849705  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:08.849712  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:08.849773  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:08.876618  604010 cri.go:89] found id: ""
	I1213 11:55:08.876684  604010 logs.go:282] 0 containers: []
	W1213 11:55:08.876707  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:08.876726  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:08.876807  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:08.902762  604010 cri.go:89] found id: ""
	I1213 11:55:08.902802  604010 logs.go:282] 0 containers: []
	W1213 11:55:08.902811  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:08.902820  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:08.902833  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:08.918880  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:08.918910  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:08.990155  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:08.981658    6952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:08.982141    6952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:08.984095    6952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:08.984454    6952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:08.986001    6952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:55:08.990182  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:08.990196  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:09.017239  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:09.017278  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:09.049754  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:09.049785  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:11.607272  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:11.617804  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:11.617876  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:11.646336  604010 cri.go:89] found id: ""
	I1213 11:55:11.646359  604010 logs.go:282] 0 containers: []
	W1213 11:55:11.646368  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:11.646374  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:11.646434  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:11.684464  604010 cri.go:89] found id: ""
	I1213 11:55:11.684490  604010 logs.go:282] 0 containers: []
	W1213 11:55:11.684499  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:11.684505  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:11.684566  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:11.724793  604010 cri.go:89] found id: ""
	I1213 11:55:11.724816  604010 logs.go:282] 0 containers: []
	W1213 11:55:11.724824  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:11.724831  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:11.724890  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:11.760776  604010 cri.go:89] found id: ""
	I1213 11:55:11.760799  604010 logs.go:282] 0 containers: []
	W1213 11:55:11.760807  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:11.760814  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:11.760873  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:11.787122  604010 cri.go:89] found id: ""
	I1213 11:55:11.787195  604010 logs.go:282] 0 containers: []
	W1213 11:55:11.787217  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:11.787237  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:11.787333  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:11.812257  604010 cri.go:89] found id: ""
	I1213 11:55:11.812283  604010 logs.go:282] 0 containers: []
	W1213 11:55:11.812291  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:11.812298  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:11.812359  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:11.837304  604010 cri.go:89] found id: ""
	I1213 11:55:11.837341  604010 logs.go:282] 0 containers: []
	W1213 11:55:11.837350  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:11.837356  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:11.837427  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:11.861726  604010 cri.go:89] found id: ""
	I1213 11:55:11.861759  604010 logs.go:282] 0 containers: []
	W1213 11:55:11.861768  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:11.861778  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:11.861792  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:11.918248  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:11.918285  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:11.934535  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:11.934571  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:12.005308  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:11.993379    7063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:11.994149    7063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:11.995831    7063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:11.996328    7063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:11.998145    7063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:55:12.005338  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:12.005351  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:12.031381  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:12.031415  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:14.558358  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:14.569230  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:14.569297  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:14.594108  604010 cri.go:89] found id: ""
	I1213 11:55:14.594186  604010 logs.go:282] 0 containers: []
	W1213 11:55:14.594209  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:14.594231  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:14.594306  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:14.617763  604010 cri.go:89] found id: ""
	I1213 11:55:14.617784  604010 logs.go:282] 0 containers: []
	W1213 11:55:14.617818  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:14.617824  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:14.617882  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:14.641477  604010 cri.go:89] found id: ""
	I1213 11:55:14.641499  604010 logs.go:282] 0 containers: []
	W1213 11:55:14.641508  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:14.641514  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:14.641580  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:14.706320  604010 cri.go:89] found id: ""
	I1213 11:55:14.706395  604010 logs.go:282] 0 containers: []
	W1213 11:55:14.706419  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:14.706438  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:14.706530  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:14.750579  604010 cri.go:89] found id: ""
	I1213 11:55:14.750602  604010 logs.go:282] 0 containers: []
	W1213 11:55:14.750611  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:14.750617  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:14.750738  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:14.777264  604010 cri.go:89] found id: ""
	I1213 11:55:14.777299  604010 logs.go:282] 0 containers: []
	W1213 11:55:14.777308  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:14.777321  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:14.777392  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:14.801675  604010 cri.go:89] found id: ""
	I1213 11:55:14.801750  604010 logs.go:282] 0 containers: []
	W1213 11:55:14.801775  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:14.801794  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:14.801878  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:14.826273  604010 cri.go:89] found id: ""
	I1213 11:55:14.826308  604010 logs.go:282] 0 containers: []
	W1213 11:55:14.826317  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:14.826327  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:14.826341  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:14.852456  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:14.852492  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:14.880309  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:14.880337  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:14.935692  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:14.935727  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:14.952137  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:14.952167  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:15.033989  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:15.011900    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:15.014560    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:15.015092    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:15.017168    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:15.018209    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
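When a cycle fails, the "Gathering logs for ..." steps above fan out over the same few commands each time: journalctl for the kubelet and containerd units, a filtered dmesg, the versioned kubectl, and a container listing. An illustrative Go sketch that shells out to the exact commands shown in the log (the gather helper is an invention for this sketch; error handling is kept minimal on purpose):

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one diagnostic command through bash -c, as the log does,
// and prints whatever comes back alongside any error.
func gather(label, cmd string) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("== %s (err=%v) ==\n%s\n", label, err, out)
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("containerd", "sudo journalctl -u containerd -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
}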
	I1213 11:55:17.535599  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:17.547401  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:17.547477  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:17.573160  604010 cri.go:89] found id: ""
	I1213 11:55:17.573190  604010 logs.go:282] 0 containers: []
	W1213 11:55:17.573199  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:17.573206  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:17.573269  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:17.602638  604010 cri.go:89] found id: ""
	I1213 11:55:17.602664  604010 logs.go:282] 0 containers: []
	W1213 11:55:17.602673  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:17.602679  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:17.602761  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:17.628217  604010 cri.go:89] found id: ""
	I1213 11:55:17.628242  604010 logs.go:282] 0 containers: []
	W1213 11:55:17.628251  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:17.628258  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:17.628321  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:17.653857  604010 cri.go:89] found id: ""
	I1213 11:55:17.653923  604010 logs.go:282] 0 containers: []
	W1213 11:55:17.653934  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:17.653941  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:17.654004  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:17.730131  604010 cri.go:89] found id: ""
	I1213 11:55:17.730166  604010 logs.go:282] 0 containers: []
	W1213 11:55:17.730175  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:17.730211  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:17.730290  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:17.764018  604010 cri.go:89] found id: ""
	I1213 11:55:17.764045  604010 logs.go:282] 0 containers: []
	W1213 11:55:17.764053  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:17.764060  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:17.764139  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:17.789006  604010 cri.go:89] found id: ""
	I1213 11:55:17.789029  604010 logs.go:282] 0 containers: []
	W1213 11:55:17.789039  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:17.789045  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:17.789110  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:17.820038  604010 cri.go:89] found id: ""
	I1213 11:55:17.820061  604010 logs.go:282] 0 containers: []
	W1213 11:55:17.820070  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:17.820080  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:17.820091  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:17.845672  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:17.845708  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:17.876520  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:17.876549  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:17.934113  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:17.934148  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:17.950852  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:17.950884  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:18.024225  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:18.014810    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:18.015320    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:18.017184    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:18.017872    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:18.019543    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
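Each cycle opens with sudo pgrep -xnf kube-apiserver.*minikube.*: pgrep exits non-zero when no process matches the pattern, and that failure is what keeps the loop retrying. A minimal sketch of that probe (the pattern string is copied from the log; the surrounding program is an assumption):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// pgrep returns exit status 1 when nothing matches, so err != nil
	// here means "no kube-apiserver process yet".
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	if err != nil {
		fmt.Println("kube-apiserver process not found:", err)
		return
	}
	fmt.Println("kube-apiserver process is running")
}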
	I1213 11:55:20.526091  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:20.539006  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:20.539072  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:20.568228  604010 cri.go:89] found id: ""
	I1213 11:55:20.568252  604010 logs.go:282] 0 containers: []
	W1213 11:55:20.568260  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:20.568266  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:20.568341  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:20.595603  604010 cri.go:89] found id: ""
	I1213 11:55:20.595632  604010 logs.go:282] 0 containers: []
	W1213 11:55:20.595642  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:20.595648  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:20.595710  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:20.619697  604010 cri.go:89] found id: ""
	I1213 11:55:20.619723  604010 logs.go:282] 0 containers: []
	W1213 11:55:20.619732  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:20.619739  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:20.619801  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:20.644480  604010 cri.go:89] found id: ""
	I1213 11:55:20.644507  604010 logs.go:282] 0 containers: []
	W1213 11:55:20.644516  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:20.644523  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:20.644605  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:20.707263  604010 cri.go:89] found id: ""
	I1213 11:55:20.707286  604010 logs.go:282] 0 containers: []
	W1213 11:55:20.707295  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:20.707301  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:20.707362  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:20.753734  604010 cri.go:89] found id: ""
	I1213 11:55:20.753758  604010 logs.go:282] 0 containers: []
	W1213 11:55:20.753767  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:20.753773  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:20.753832  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:20.779244  604010 cri.go:89] found id: ""
	I1213 11:55:20.779267  604010 logs.go:282] 0 containers: []
	W1213 11:55:20.779275  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:20.779282  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:20.779342  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:20.808050  604010 cri.go:89] found id: ""
	I1213 11:55:20.808127  604010 logs.go:282] 0 containers: []
	W1213 11:55:20.808144  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:20.808155  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:20.808167  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:20.863714  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:20.863751  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:20.879958  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:20.879988  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:20.947629  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:20.938365    7395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:20.939048    7395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:20.940693    7395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:20.941317    7395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:20.943088    7395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:55:20.947653  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:20.947668  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:20.972884  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:20.972921  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
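The "container status" line just above uses a runtime-agnostic fallback: "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" prefers crictl and only falls back to docker if the crictl invocation fails. A rough Go equivalent of that fallback (a sketch; the real logic lives in the shell one-liner shown in the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Prefer crictl when it is on PATH, mirroring `which crictl`.
	if _, err := exec.LookPath("crictl"); err == nil {
		out, _ := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
		fmt.Print(string(out))
		return
	}
	// Otherwise fall back to docker, as the trailing `|| sudo docker ps -a` does.
	out, _ := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	fmt.Print(string(out))
}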
	I1213 11:55:23.506189  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:23.517150  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:23.517220  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:23.544888  604010 cri.go:89] found id: ""
	I1213 11:55:23.544912  604010 logs.go:282] 0 containers: []
	W1213 11:55:23.544920  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:23.544927  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:23.544992  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:23.571162  604010 cri.go:89] found id: ""
	I1213 11:55:23.571189  604010 logs.go:282] 0 containers: []
	W1213 11:55:23.571197  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:23.571204  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:23.571288  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:23.596593  604010 cri.go:89] found id: ""
	I1213 11:55:23.596618  604010 logs.go:282] 0 containers: []
	W1213 11:55:23.596626  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:23.596633  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:23.596693  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:23.622396  604010 cri.go:89] found id: ""
	I1213 11:55:23.622424  604010 logs.go:282] 0 containers: []
	W1213 11:55:23.622433  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:23.622439  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:23.622541  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:23.648441  604010 cri.go:89] found id: ""
	I1213 11:55:23.648468  604010 logs.go:282] 0 containers: []
	W1213 11:55:23.648478  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:23.648484  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:23.648552  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:23.698559  604010 cri.go:89] found id: ""
	I1213 11:55:23.698586  604010 logs.go:282] 0 containers: []
	W1213 11:55:23.698595  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:23.698601  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:23.698664  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:23.749855  604010 cri.go:89] found id: ""
	I1213 11:55:23.749883  604010 logs.go:282] 0 containers: []
	W1213 11:55:23.749893  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:23.749905  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:23.749964  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:23.781499  604010 cri.go:89] found id: ""
	I1213 11:55:23.781527  604010 logs.go:282] 0 containers: []
	W1213 11:55:23.781536  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:23.781547  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:23.781571  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:23.815145  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:23.815174  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:23.871093  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:23.871128  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:23.887427  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:23.887455  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:23.956327  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:23.948085    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:23.948683    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:23.950286    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:23.950824    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:23.952300    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:55:23.956396  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:23.956417  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:26.482024  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:26.492511  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:26.492582  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:26.517699  604010 cri.go:89] found id: ""
	I1213 11:55:26.517777  604010 logs.go:282] 0 containers: []
	W1213 11:55:26.517800  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:26.517818  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:26.517906  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:26.545138  604010 cri.go:89] found id: ""
	I1213 11:55:26.545207  604010 logs.go:282] 0 containers: []
	W1213 11:55:26.545233  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:26.545251  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:26.545341  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:26.570019  604010 cri.go:89] found id: ""
	I1213 11:55:26.570090  604010 logs.go:282] 0 containers: []
	W1213 11:55:26.570116  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:26.570134  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:26.570226  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:26.596752  604010 cri.go:89] found id: ""
	I1213 11:55:26.596831  604010 logs.go:282] 0 containers: []
	W1213 11:55:26.596854  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:26.596869  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:26.596946  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:26.625280  604010 cri.go:89] found id: ""
	I1213 11:55:26.625306  604010 logs.go:282] 0 containers: []
	W1213 11:55:26.625315  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:26.625322  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:26.625379  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:26.655489  604010 cri.go:89] found id: ""
	I1213 11:55:26.655513  604010 logs.go:282] 0 containers: []
	W1213 11:55:26.655522  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:26.655528  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:26.655594  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:26.688001  604010 cri.go:89] found id: ""
	I1213 11:55:26.688028  604010 logs.go:282] 0 containers: []
	W1213 11:55:26.688037  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:26.688043  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:26.688103  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:26.720200  604010 cri.go:89] found id: ""
	I1213 11:55:26.720226  604010 logs.go:282] 0 containers: []
	W1213 11:55:26.720235  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
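Each retry pairs a process check with a per-component CRI query: pgrep looks for a live kube-apiserver process, then crictl is asked for every expected control-plane container by name, and an empty result is exactly what the found id: "" / 0 containers lines record. The whole sequence condenses to a short sketch (commands verbatim from the log, run inside the node):

    # Probe for the apiserver process, then each expected container by name.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== $c =="
      sudo crictl ps -a --quiet --name="$c"   # empty output: no such container, running or exited
    done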
	I1213 11:55:26.720244  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:26.720255  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:26.751334  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:26.751368  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:26.791793  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:26.791819  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:26.847456  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:26.847493  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:26.864079  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:26.864109  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:26.927248  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:26.919337    7630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:26.920135    7630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:26.921687    7630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:26.921990    7630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:26.923429    7630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
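With no containers found, every cycle falls back to regathering the same evidence: the kubelet and containerd journals, a filtered dmesg, the container status listing, and kubectl describe nodes. Only the last step needs a live apiserver, which is why it is the only one that errors. The journal and dmesg gather commands, verbatim from the log (run inside the node):

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400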
	I1213 11:55:29.427521  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:29.438225  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:29.438297  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:29.463111  604010 cri.go:89] found id: ""
	I1213 11:55:29.463137  604010 logs.go:282] 0 containers: []
	W1213 11:55:29.463146  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:29.463154  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:29.463222  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:29.488474  604010 cri.go:89] found id: ""
	I1213 11:55:29.488504  604010 logs.go:282] 0 containers: []
	W1213 11:55:29.488513  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:29.488519  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:29.488580  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:29.514792  604010 cri.go:89] found id: ""
	I1213 11:55:29.514815  604010 logs.go:282] 0 containers: []
	W1213 11:55:29.514824  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:29.514830  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:29.514890  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:29.540502  604010 cri.go:89] found id: ""
	I1213 11:55:29.540528  604010 logs.go:282] 0 containers: []
	W1213 11:55:29.540537  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:29.540544  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:29.540623  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:29.569010  604010 cri.go:89] found id: ""
	I1213 11:55:29.569035  604010 logs.go:282] 0 containers: []
	W1213 11:55:29.569044  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:29.569050  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:29.569143  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:29.597354  604010 cri.go:89] found id: ""
	I1213 11:55:29.597381  604010 logs.go:282] 0 containers: []
	W1213 11:55:29.597390  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:29.597396  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:29.597482  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:29.622205  604010 cri.go:89] found id: ""
	I1213 11:55:29.622230  604010 logs.go:282] 0 containers: []
	W1213 11:55:29.622239  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:29.622245  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:29.622321  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:29.649830  604010 cri.go:89] found id: ""
	I1213 11:55:29.649856  604010 logs.go:282] 0 containers: []
	W1213 11:55:29.649865  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:29.649874  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:29.649914  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:29.717017  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:29.717058  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:29.745372  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:29.745398  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:29.821563  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:29.813336    7729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:29.813962    7729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:29.815565    7729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:29.815994    7729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:29.817582    7729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:55:29.821589  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:29.821603  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:29.847167  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:29.847206  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
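The "container status" line above relies on a small shell fallback: the backticks substitute the full crictl path when which finds one (or the bare name crictl otherwise), and if that invocation fails entirely the trailing || switches to docker ps -a. An equivalent, slightly more readable sketch (command -v substituted for which; a rewrite for illustration, not minikube's actual source):

    CRICTL="$(command -v crictl || echo crictl)"
    sudo "$CRICTL" ps -a || sudo docker ps -a   # docker only as a last resort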
	I1213 11:55:32.379999  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:32.394044  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:32.394117  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:32.419725  604010 cri.go:89] found id: ""
	I1213 11:55:32.419751  604010 logs.go:282] 0 containers: []
	W1213 11:55:32.419759  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:32.419767  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:32.419827  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:32.448514  604010 cri.go:89] found id: ""
	I1213 11:55:32.448537  604010 logs.go:282] 0 containers: []
	W1213 11:55:32.448546  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:32.448552  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:32.448614  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:32.474220  604010 cri.go:89] found id: ""
	I1213 11:55:32.474257  604010 logs.go:282] 0 containers: []
	W1213 11:55:32.474266  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:32.474272  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:32.474331  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:32.501945  604010 cri.go:89] found id: ""
	I1213 11:55:32.501970  604010 logs.go:282] 0 containers: []
	W1213 11:55:32.501980  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:32.501987  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:32.502051  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:32.529117  604010 cri.go:89] found id: ""
	I1213 11:55:32.529143  604010 logs.go:282] 0 containers: []
	W1213 11:55:32.529151  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:32.529159  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:32.529220  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:32.558516  604010 cri.go:89] found id: ""
	I1213 11:55:32.558545  604010 logs.go:282] 0 containers: []
	W1213 11:55:32.558554  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:32.558563  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:32.558624  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:32.584351  604010 cri.go:89] found id: ""
	I1213 11:55:32.584375  604010 logs.go:282] 0 containers: []
	W1213 11:55:32.584383  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:32.584390  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:32.584459  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:32.610180  604010 cri.go:89] found id: ""
	I1213 11:55:32.610203  604010 logs.go:282] 0 containers: []
	W1213 11:55:32.610212  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:32.610222  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:32.610233  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:32.668609  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:32.668647  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:32.687093  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:32.687199  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:32.806632  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:32.798550    7842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:32.799065    7842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:32.800667    7842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:32.801088    7842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:32.802806    7842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:55:32.806658  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:32.806670  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:32.832549  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:32.832585  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:35.361963  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:35.372809  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:35.372881  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:35.398138  604010 cri.go:89] found id: ""
	I1213 11:55:35.398164  604010 logs.go:282] 0 containers: []
	W1213 11:55:35.398172  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:35.398178  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:35.398238  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:35.423828  604010 cri.go:89] found id: ""
	I1213 11:55:35.423854  604010 logs.go:282] 0 containers: []
	W1213 11:55:35.423863  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:35.423870  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:35.423934  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:35.453483  604010 cri.go:89] found id: ""
	I1213 11:55:35.453508  604010 logs.go:282] 0 containers: []
	W1213 11:55:35.453518  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:35.453524  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:35.453617  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:35.478270  604010 cri.go:89] found id: ""
	I1213 11:55:35.478294  604010 logs.go:282] 0 containers: []
	W1213 11:55:35.478303  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:35.478310  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:35.478373  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:35.508196  604010 cri.go:89] found id: ""
	I1213 11:55:35.508226  604010 logs.go:282] 0 containers: []
	W1213 11:55:35.508235  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:35.508242  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:35.508327  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:35.537327  604010 cri.go:89] found id: ""
	I1213 11:55:35.537359  604010 logs.go:282] 0 containers: []
	W1213 11:55:35.537369  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:35.537401  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:35.537490  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:35.564387  604010 cri.go:89] found id: ""
	I1213 11:55:35.564412  604010 logs.go:282] 0 containers: []
	W1213 11:55:35.564420  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:35.564427  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:35.564483  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:35.589741  604010 cri.go:89] found id: ""
	I1213 11:55:35.589766  604010 logs.go:282] 0 containers: []
	W1213 11:55:35.589776  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:35.589787  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:35.589798  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:35.645240  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:35.645275  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:35.672440  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:35.672532  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:35.779839  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:35.770429    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:35.771175    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:35.772996    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:35.773416    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:35.775177    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:55:35.779861  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:35.779874  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:35.804945  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:35.804983  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:38.336379  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:38.347209  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:38.347278  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:38.372679  604010 cri.go:89] found id: ""
	I1213 11:55:38.372706  604010 logs.go:282] 0 containers: []
	W1213 11:55:38.372716  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:38.372723  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:38.372781  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:38.401308  604010 cri.go:89] found id: ""
	I1213 11:55:38.401340  604010 logs.go:282] 0 containers: []
	W1213 11:55:38.401354  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:38.401360  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:38.401428  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:38.425990  604010 cri.go:89] found id: ""
	I1213 11:55:38.426025  604010 logs.go:282] 0 containers: []
	W1213 11:55:38.426034  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:38.426040  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:38.426097  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:38.452858  604010 cri.go:89] found id: ""
	I1213 11:55:38.452884  604010 logs.go:282] 0 containers: []
	W1213 11:55:38.452892  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:38.452900  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:38.452958  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:38.477766  604010 cri.go:89] found id: ""
	I1213 11:55:38.477791  604010 logs.go:282] 0 containers: []
	W1213 11:55:38.477800  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:38.477807  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:38.477876  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:38.503003  604010 cri.go:89] found id: ""
	I1213 11:55:38.503028  604010 logs.go:282] 0 containers: []
	W1213 11:55:38.503037  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:38.503043  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:38.503110  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:38.532923  604010 cri.go:89] found id: ""
	I1213 11:55:38.532946  604010 logs.go:282] 0 containers: []
	W1213 11:55:38.532955  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:38.532962  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:38.533021  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:38.561367  604010 cri.go:89] found id: ""
	I1213 11:55:38.561389  604010 logs.go:282] 0 containers: []
	W1213 11:55:38.561397  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:38.561406  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:38.561425  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:38.627276  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:38.618551    8054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:38.619310    8054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:38.621183    8054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:38.621748    8054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:38.623328    8054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:55:38.627341  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:38.627361  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:38.652980  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:38.653021  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:38.702202  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:38.702236  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:38.775658  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:38.775742  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:41.293324  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:41.304911  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:41.304988  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:41.329954  604010 cri.go:89] found id: ""
	I1213 11:55:41.329981  604010 logs.go:282] 0 containers: []
	W1213 11:55:41.329990  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:41.329997  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:41.330068  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:41.356810  604010 cri.go:89] found id: ""
	I1213 11:55:41.356835  604010 logs.go:282] 0 containers: []
	W1213 11:55:41.356845  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:41.356851  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:41.356911  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:41.382782  604010 cri.go:89] found id: ""
	I1213 11:55:41.382807  604010 logs.go:282] 0 containers: []
	W1213 11:55:41.382816  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:41.382823  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:41.382882  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:41.411145  604010 cri.go:89] found id: ""
	I1213 11:55:41.411170  604010 logs.go:282] 0 containers: []
	W1213 11:55:41.411179  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:41.411186  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:41.411242  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:41.439686  604010 cri.go:89] found id: ""
	I1213 11:55:41.439713  604010 logs.go:282] 0 containers: []
	W1213 11:55:41.439722  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:41.439729  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:41.439797  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:41.463861  604010 cri.go:89] found id: ""
	I1213 11:55:41.463884  604010 logs.go:282] 0 containers: []
	W1213 11:55:41.463893  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:41.463900  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:41.463958  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:41.488219  604010 cri.go:89] found id: ""
	I1213 11:55:41.488243  604010 logs.go:282] 0 containers: []
	W1213 11:55:41.488252  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:41.488258  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:41.488339  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:41.513569  604010 cri.go:89] found id: ""
	I1213 11:55:41.513600  604010 logs.go:282] 0 containers: []
	W1213 11:55:41.513609  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:41.513619  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:41.513656  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:41.570549  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:41.570585  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:41.587559  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:41.587588  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:41.654460  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:41.646598    8171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:41.647136    8171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:41.648610    8171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:41.649143    8171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:41.650745    8171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:55:41.654481  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:41.654494  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:41.679884  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:41.679918  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:44.238824  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:44.249658  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:44.249735  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:44.274262  604010 cri.go:89] found id: ""
	I1213 11:55:44.274287  604010 logs.go:282] 0 containers: []
	W1213 11:55:44.274297  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:44.274303  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:44.274365  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:44.298725  604010 cri.go:89] found id: ""
	I1213 11:55:44.298750  604010 logs.go:282] 0 containers: []
	W1213 11:55:44.298759  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:44.298765  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:44.298831  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:44.332989  604010 cri.go:89] found id: ""
	I1213 11:55:44.333019  604010 logs.go:282] 0 containers: []
	W1213 11:55:44.333028  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:44.333035  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:44.333095  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:44.358205  604010 cri.go:89] found id: ""
	I1213 11:55:44.358229  604010 logs.go:282] 0 containers: []
	W1213 11:55:44.358238  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:44.358250  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:44.358313  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:44.383989  604010 cri.go:89] found id: ""
	I1213 11:55:44.384017  604010 logs.go:282] 0 containers: []
	W1213 11:55:44.384027  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:44.384034  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:44.384099  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:44.409651  604010 cri.go:89] found id: ""
	I1213 11:55:44.409677  604010 logs.go:282] 0 containers: []
	W1213 11:55:44.409686  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:44.409692  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:44.409751  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:44.435253  604010 cri.go:89] found id: ""
	I1213 11:55:44.435280  604010 logs.go:282] 0 containers: []
	W1213 11:55:44.435288  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:44.435295  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:44.435354  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:44.459342  604010 cri.go:89] found id: ""
	I1213 11:55:44.459379  604010 logs.go:282] 0 containers: []
	W1213 11:55:44.459388  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:44.459398  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:44.459409  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:44.527760  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:44.518804    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:44.519537    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:44.521331    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:44.521838    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:44.523375    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:55:44.527781  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:44.527793  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:44.554052  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:44.554086  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:44.583553  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:44.583582  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:44.639690  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:44.639723  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:47.156860  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:47.167658  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:47.167728  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:47.191689  604010 cri.go:89] found id: ""
	I1213 11:55:47.191714  604010 logs.go:282] 0 containers: []
	W1213 11:55:47.191723  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:47.191730  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:47.191790  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:47.217625  604010 cri.go:89] found id: ""
	I1213 11:55:47.217652  604010 logs.go:282] 0 containers: []
	W1213 11:55:47.217665  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:47.217679  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:47.217756  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:47.246057  604010 cri.go:89] found id: ""
	I1213 11:55:47.246080  604010 logs.go:282] 0 containers: []
	W1213 11:55:47.246088  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:47.246094  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:47.246153  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:47.272649  604010 cri.go:89] found id: ""
	I1213 11:55:47.272673  604010 logs.go:282] 0 containers: []
	W1213 11:55:47.272682  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:47.272688  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:47.272747  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:47.297156  604010 cri.go:89] found id: ""
	I1213 11:55:47.297178  604010 logs.go:282] 0 containers: []
	W1213 11:55:47.297186  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:47.297192  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:47.297249  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:47.321533  604010 cri.go:89] found id: ""
	I1213 11:55:47.321555  604010 logs.go:282] 0 containers: []
	W1213 11:55:47.321563  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:47.321570  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:47.321647  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:47.347526  604010 cri.go:89] found id: ""
	I1213 11:55:47.347548  604010 logs.go:282] 0 containers: []
	W1213 11:55:47.347558  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:47.347566  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:47.347743  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:47.373360  604010 cri.go:89] found id: ""
	I1213 11:55:47.373437  604010 logs.go:282] 0 containers: []
	W1213 11:55:47.373466  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:47.373491  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:47.373544  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:47.406388  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:47.406463  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:47.467132  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:47.467169  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:47.482951  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:47.482977  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:47.547530  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:47.538747    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:47.539246    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:47.540864    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:47.541466    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:47.543147    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:55:47.547599  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:47.547625  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:50.076734  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:50.088146  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:50.088221  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:50.114846  604010 cri.go:89] found id: ""
	I1213 11:55:50.114871  604010 logs.go:282] 0 containers: []
	W1213 11:55:50.114879  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:50.114885  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:50.114952  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:50.140346  604010 cri.go:89] found id: ""
	I1213 11:55:50.140383  604010 logs.go:282] 0 containers: []
	W1213 11:55:50.140393  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:50.140400  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:50.140461  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:50.165612  604010 cri.go:89] found id: ""
	I1213 11:55:50.165647  604010 logs.go:282] 0 containers: []
	W1213 11:55:50.165656  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:50.165663  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:50.165735  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:50.193167  604010 cri.go:89] found id: ""
	I1213 11:55:50.193196  604010 logs.go:282] 0 containers: []
	W1213 11:55:50.193205  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:50.193211  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:50.193288  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:50.217552  604010 cri.go:89] found id: ""
	I1213 11:55:50.217602  604010 logs.go:282] 0 containers: []
	W1213 11:55:50.217622  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:50.217630  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:50.217703  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:50.243207  604010 cri.go:89] found id: ""
	I1213 11:55:50.243230  604010 logs.go:282] 0 containers: []
	W1213 11:55:50.243240  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:50.243246  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:50.243306  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:50.267889  604010 cri.go:89] found id: ""
	I1213 11:55:50.267961  604010 logs.go:282] 0 containers: []
	W1213 11:55:50.267980  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:50.267988  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:50.268050  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:50.293393  604010 cri.go:89] found id: ""
	I1213 11:55:50.293420  604010 logs.go:282] 0 containers: []
	W1213 11:55:50.293429  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:50.293448  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:50.293461  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:50.358945  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:50.350414    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:50.351257    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:50.352886    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:50.353223    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:50.354777    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:55:50.358967  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:50.358982  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:50.384886  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:50.384922  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:50.416671  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:50.416697  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:50.472398  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:50.472437  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
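Each `describe nodes` attempt fails identically: the node-local kubectl dials https://localhost:8443, but with no kube-apiserver container running nothing listens on that port, so the TCP connect is refused before any API call is made. Assuming `ss` and `curl` are available in the node image, the refusal can be confirmed directly (this check is illustrative, not part of minikube's own flow):

    # Show that nothing listens on the apiserver port, then trigger
    # the same "connection refused" that kubectl reports above.
    sudo ss -ltn 'sport = :8443'
    curl -k --max-time 5 https://localhost:8443/livez \
      || echo "connect failed: apiserver is not listening"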
	I1213 11:55:52.988724  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:53.000673  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:53.000825  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:53.028787  604010 cri.go:89] found id: ""
	I1213 11:55:53.028812  604010 logs.go:282] 0 containers: []
	W1213 11:55:53.028822  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:53.028829  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:53.028960  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:53.059024  604010 cri.go:89] found id: ""
	I1213 11:55:53.059060  604010 logs.go:282] 0 containers: []
	W1213 11:55:53.059069  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:53.059076  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:53.059137  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:53.084415  604010 cri.go:89] found id: ""
	I1213 11:55:53.084443  604010 logs.go:282] 0 containers: []
	W1213 11:55:53.084452  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:53.084459  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:53.084519  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:53.111367  604010 cri.go:89] found id: ""
	I1213 11:55:53.111402  604010 logs.go:282] 0 containers: []
	W1213 11:55:53.111413  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:53.111420  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:53.111485  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:53.138948  604010 cri.go:89] found id: ""
	I1213 11:55:53.138973  604010 logs.go:282] 0 containers: []
	W1213 11:55:53.138992  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:53.138999  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:53.139058  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:53.164317  604010 cri.go:89] found id: ""
	I1213 11:55:53.164341  604010 logs.go:282] 0 containers: []
	W1213 11:55:53.164350  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:53.164363  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:53.164420  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:53.189237  604010 cri.go:89] found id: ""
	I1213 11:55:53.189263  604010 logs.go:282] 0 containers: []
	W1213 11:55:53.189284  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:53.189291  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:53.189365  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:53.213792  604010 cri.go:89] found id: ""
	I1213 11:55:53.213831  604010 logs.go:282] 0 containers: []
	W1213 11:55:53.213840  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:53.213849  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:53.213864  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:53.268812  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:53.268852  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:53.284561  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:53.284592  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:53.350505  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:53.342240    8626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:53.342928    8626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:53.344529    8626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:53.345039    8626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:53.346717    8626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:55:53.350528  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:53.350540  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:53.375550  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:53.375586  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:55.903770  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:55.916528  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:55.916606  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:55.974216  604010 cri.go:89] found id: ""
	I1213 11:55:55.974238  604010 logs.go:282] 0 containers: []
	W1213 11:55:55.974246  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:55.974254  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:55.974316  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:56.009212  604010 cri.go:89] found id: ""
	I1213 11:55:56.009235  604010 logs.go:282] 0 containers: []
	W1213 11:55:56.009243  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:56.009250  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:56.009308  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:56.036696  604010 cri.go:89] found id: ""
	I1213 11:55:56.036722  604010 logs.go:282] 0 containers: []
	W1213 11:55:56.036731  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:56.036738  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:56.036821  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:56.062550  604010 cri.go:89] found id: ""
	I1213 11:55:56.062577  604010 logs.go:282] 0 containers: []
	W1213 11:55:56.062586  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:56.062592  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:56.062649  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:56.087384  604010 cri.go:89] found id: ""
	I1213 11:55:56.087410  604010 logs.go:282] 0 containers: []
	W1213 11:55:56.087419  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:56.087425  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:56.087506  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:56.113129  604010 cri.go:89] found id: ""
	I1213 11:55:56.113153  604010 logs.go:282] 0 containers: []
	W1213 11:55:56.113164  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:56.113171  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:56.113234  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:56.137999  604010 cri.go:89] found id: ""
	I1213 11:55:56.138021  604010 logs.go:282] 0 containers: []
	W1213 11:55:56.138030  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:56.138036  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:56.138094  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:56.164815  604010 cri.go:89] found id: ""
	I1213 11:55:56.164841  604010 logs.go:282] 0 containers: []
	W1213 11:55:56.164851  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:56.164861  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:56.164872  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:56.190007  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:56.190042  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:56.222068  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:56.222097  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:56.277067  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:56.277104  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:56.293465  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:56.293495  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:56.360755  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:56.351282    8753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:56.352626    8753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:56.353483    8753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:56.354403    8753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:56.356173    8753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:55:58.861486  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:58.872284  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:58.872365  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:58.898051  604010 cri.go:89] found id: ""
	I1213 11:55:58.898077  604010 logs.go:282] 0 containers: []
	W1213 11:55:58.898086  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:58.898093  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:58.898152  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:58.937804  604010 cri.go:89] found id: ""
	I1213 11:55:58.937834  604010 logs.go:282] 0 containers: []
	W1213 11:55:58.937852  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:58.937865  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:58.937957  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:58.987256  604010 cri.go:89] found id: ""
	I1213 11:55:58.987290  604010 logs.go:282] 0 containers: []
	W1213 11:55:58.987301  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:58.987308  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:58.987378  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:59.018252  604010 cri.go:89] found id: ""
	I1213 11:55:59.018274  604010 logs.go:282] 0 containers: []
	W1213 11:55:59.018282  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:59.018289  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:59.018350  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:59.046993  604010 cri.go:89] found id: ""
	I1213 11:55:59.047018  604010 logs.go:282] 0 containers: []
	W1213 11:55:59.047027  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:59.047033  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:59.047089  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:59.072813  604010 cri.go:89] found id: ""
	I1213 11:55:59.072888  604010 logs.go:282] 0 containers: []
	W1213 11:55:59.072903  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:59.072913  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:59.072988  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:59.097766  604010 cri.go:89] found id: ""
	I1213 11:55:59.097792  604010 logs.go:282] 0 containers: []
	W1213 11:55:59.097801  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:59.097808  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:59.097868  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:59.125013  604010 cri.go:89] found id: ""
	I1213 11:55:59.125038  604010 logs.go:282] 0 containers: []
	W1213 11:55:59.125047  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:59.125056  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:59.125070  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:59.150130  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:59.150164  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:59.178033  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:59.178107  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:59.233761  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:59.233795  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:59.249736  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:59.249772  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:59.314577  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:59.305285    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:59.306134    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:59.307637    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:59.308126    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:59.310000    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:01.814837  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:01.826268  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:01.826352  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:01.856935  604010 cri.go:89] found id: ""
	I1213 11:56:01.856960  604010 logs.go:282] 0 containers: []
	W1213 11:56:01.856969  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:01.856979  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:01.857039  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:01.884429  604010 cri.go:89] found id: ""
	I1213 11:56:01.884454  604010 logs.go:282] 0 containers: []
	W1213 11:56:01.884463  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:01.884470  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:01.884530  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:01.929790  604010 cri.go:89] found id: ""
	I1213 11:56:01.929812  604010 logs.go:282] 0 containers: []
	W1213 11:56:01.929821  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:01.929828  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:01.929890  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:01.997657  604010 cri.go:89] found id: ""
	I1213 11:56:01.997686  604010 logs.go:282] 0 containers: []
	W1213 11:56:01.997703  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:01.997713  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:01.997785  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:02.027667  604010 cri.go:89] found id: ""
	I1213 11:56:02.027692  604010 logs.go:282] 0 containers: []
	W1213 11:56:02.027701  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:02.027707  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:02.027770  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:02.052911  604010 cri.go:89] found id: ""
	I1213 11:56:02.052935  604010 logs.go:282] 0 containers: []
	W1213 11:56:02.052944  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:02.052950  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:02.053009  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:02.078744  604010 cri.go:89] found id: ""
	I1213 11:56:02.078813  604010 logs.go:282] 0 containers: []
	W1213 11:56:02.078839  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:02.078857  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:02.078946  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:02.104065  604010 cri.go:89] found id: ""
	I1213 11:56:02.104136  604010 logs.go:282] 0 containers: []
	W1213 11:56:02.104158  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:02.104181  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:02.104219  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:02.177602  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:02.166576    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:02.167162    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:02.170937    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:02.171543    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:02.173272    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:02.177623  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:02.177635  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:02.203025  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:02.203064  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:02.232249  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:02.232275  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:02.288746  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:02.288781  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
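The "container status" gatherer uses a runtime-agnostic fallback chain: it resolves crictl from PATH and, if that command fails or is missing, retries with docker, so one gatherer covers both containerd- and docker-backed nodes. The `which crictl || echo crictl` substitution keeps the command word non-empty when crictl is absent, which lets the `|| sudo docker ps -a` branch take over cleanly. A stripped-down version of the same pattern:

    # Prefer crictl when installed; otherwise fall back to docker.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a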
	I1213 11:56:04.806667  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:04.817452  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:04.817526  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:04.843671  604010 cri.go:89] found id: ""
	I1213 11:56:04.843696  604010 logs.go:282] 0 containers: []
	W1213 11:56:04.843705  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:04.843712  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:04.843770  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:04.869847  604010 cri.go:89] found id: ""
	I1213 11:56:04.869873  604010 logs.go:282] 0 containers: []
	W1213 11:56:04.869882  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:04.869889  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:04.869949  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:04.895727  604010 cri.go:89] found id: ""
	I1213 11:56:04.895750  604010 logs.go:282] 0 containers: []
	W1213 11:56:04.895759  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:04.895766  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:04.895874  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:04.958057  604010 cri.go:89] found id: ""
	I1213 11:56:04.958083  604010 logs.go:282] 0 containers: []
	W1213 11:56:04.958093  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:04.958102  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:04.958164  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:05.011151  604010 cri.go:89] found id: ""
	I1213 11:56:05.011180  604010 logs.go:282] 0 containers: []
	W1213 11:56:05.011191  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:05.011198  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:05.011301  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:05.042226  604010 cri.go:89] found id: ""
	I1213 11:56:05.042257  604010 logs.go:282] 0 containers: []
	W1213 11:56:05.042267  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:05.042274  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:05.042344  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:05.067033  604010 cri.go:89] found id: ""
	I1213 11:56:05.067057  604010 logs.go:282] 0 containers: []
	W1213 11:56:05.067066  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:05.067073  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:05.067137  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:05.092704  604010 cri.go:89] found id: ""
	I1213 11:56:05.092729  604010 logs.go:282] 0 containers: []
	W1213 11:56:05.092740  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:05.092751  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:05.092789  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:05.149091  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:05.149142  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:05.165497  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:05.165536  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:05.234289  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:05.225131    9076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:05.225892    9076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:05.227653    9076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:05.228318    9076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:05.230170    9076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:05.234313  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:05.234326  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:05.259839  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:05.259877  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
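Note that `describe nodes` runs the version-pinned kubectl shipped inside the node, `/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl`, against the node-local kubeconfig, so the failure is independent of whatever kubectl or kubeconfig the host happens to have. A roughly equivalent manual invocation from the host (replace <profile> with the profile under test):

    # Run the node-local kubectl exactly as the log gatherer does.
    minikube -p <profile> ssh -- sudo \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig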
	I1213 11:56:07.795276  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:07.805797  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:07.805865  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:07.833431  604010 cri.go:89] found id: ""
	I1213 11:56:07.833458  604010 logs.go:282] 0 containers: []
	W1213 11:56:07.833467  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:07.833474  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:07.833533  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:07.859570  604010 cri.go:89] found id: ""
	I1213 11:56:07.859596  604010 logs.go:282] 0 containers: []
	W1213 11:56:07.859605  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:07.859612  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:07.859680  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:07.885597  604010 cri.go:89] found id: ""
	I1213 11:56:07.885621  604010 logs.go:282] 0 containers: []
	W1213 11:56:07.885630  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:07.885636  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:07.885693  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:07.932272  604010 cri.go:89] found id: ""
	I1213 11:56:07.932295  604010 logs.go:282] 0 containers: []
	W1213 11:56:07.932304  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:07.932311  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:07.932368  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:07.971123  604010 cri.go:89] found id: ""
	I1213 11:56:07.971146  604010 logs.go:282] 0 containers: []
	W1213 11:56:07.971156  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:07.971162  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:07.971223  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:08.020370  604010 cri.go:89] found id: ""
	I1213 11:56:08.020442  604010 logs.go:282] 0 containers: []
	W1213 11:56:08.020470  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:08.020488  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:08.020576  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:08.050772  604010 cri.go:89] found id: ""
	I1213 11:56:08.050843  604010 logs.go:282] 0 containers: []
	W1213 11:56:08.050870  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:08.050888  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:08.050977  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:08.076860  604010 cri.go:89] found id: ""
	I1213 11:56:08.076891  604010 logs.go:282] 0 containers: []
	W1213 11:56:08.076901  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:08.076911  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:08.076923  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:08.136737  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:08.136772  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:08.152700  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:08.152856  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:08.216955  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:08.208521    9190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:08.209263    9190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:08.210851    9190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:08.211330    9190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:08.212940    9190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:08.217027  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:08.217055  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:08.242524  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:08.242562  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
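Between probes the gatherer tails the last 400 lines of the kubelet and containerd journald units plus recent kernel warnings; with no control-plane containers to inspect, these journals are the main evidence for why the pods never started. The same bundle can be collected in one pass (the output path is illustrative):

    # Collect the same per-cycle diagnostics into a single file.
    {
      sudo journalctl -u kubelet -n 400
      sudo journalctl -u containerd -n 400
      sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    } > /tmp/minikube-diag.txt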
	I1213 11:56:10.774825  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:10.785504  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:10.785573  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:10.812402  604010 cri.go:89] found id: ""
	I1213 11:56:10.812424  604010 logs.go:282] 0 containers: []
	W1213 11:56:10.812433  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:10.812440  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:10.812495  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:10.837362  604010 cri.go:89] found id: ""
	I1213 11:56:10.837387  604010 logs.go:282] 0 containers: []
	W1213 11:56:10.837396  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:10.837402  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:10.837461  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:10.862348  604010 cri.go:89] found id: ""
	I1213 11:56:10.862374  604010 logs.go:282] 0 containers: []
	W1213 11:56:10.862382  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:10.862389  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:10.862447  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:10.886922  604010 cri.go:89] found id: ""
	I1213 11:56:10.886999  604010 logs.go:282] 0 containers: []
	W1213 11:56:10.887020  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:10.887038  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:10.887121  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:10.931278  604010 cri.go:89] found id: ""
	I1213 11:56:10.931347  604010 logs.go:282] 0 containers: []
	W1213 11:56:10.931369  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:10.931387  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:10.931475  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:10.974160  604010 cri.go:89] found id: ""
	I1213 11:56:10.974226  604010 logs.go:282] 0 containers: []
	W1213 11:56:10.974254  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:10.974272  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:10.974357  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:11.010218  604010 cri.go:89] found id: ""
	I1213 11:56:11.010290  604010 logs.go:282] 0 containers: []
	W1213 11:56:11.010313  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:11.010332  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:11.010424  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:11.039062  604010 cri.go:89] found id: ""
	I1213 11:56:11.039097  604010 logs.go:282] 0 containers: []
	W1213 11:56:11.039108  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:11.039118  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:11.039130  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:11.095996  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:11.096035  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:11.112552  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:11.112583  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:11.181416  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:11.172048    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:11.172697    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:11.174491    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:11.175376    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:11.177169    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:11.181436  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:11.181451  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:11.206963  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:11.207000  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:13.739447  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:13.750286  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:13.750359  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:13.776350  604010 cri.go:89] found id: ""
	I1213 11:56:13.776379  604010 logs.go:282] 0 containers: []
	W1213 11:56:13.776388  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:13.776395  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:13.776460  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:13.800680  604010 cri.go:89] found id: ""
	I1213 11:56:13.800705  604010 logs.go:282] 0 containers: []
	W1213 11:56:13.800714  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:13.800721  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:13.800780  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:13.826000  604010 cri.go:89] found id: ""
	I1213 11:56:13.826038  604010 logs.go:282] 0 containers: []
	W1213 11:56:13.826050  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:13.826072  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:13.826155  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:13.850538  604010 cri.go:89] found id: ""
	I1213 11:56:13.850564  604010 logs.go:282] 0 containers: []
	W1213 11:56:13.850582  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:13.850611  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:13.850706  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:13.879462  604010 cri.go:89] found id: ""
	I1213 11:56:13.879488  604010 logs.go:282] 0 containers: []
	W1213 11:56:13.879496  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:13.879503  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:13.879559  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:13.904388  604010 cri.go:89] found id: ""
	I1213 11:56:13.904414  604010 logs.go:282] 0 containers: []
	W1213 11:56:13.904422  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:13.904432  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:13.904488  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:13.936193  604010 cri.go:89] found id: ""
	I1213 11:56:13.936221  604010 logs.go:282] 0 containers: []
	W1213 11:56:13.936229  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:13.936236  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:13.936304  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:13.979520  604010 cri.go:89] found id: ""
	I1213 11:56:13.979547  604010 logs.go:282] 0 containers: []
	W1213 11:56:13.979556  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:13.979566  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:13.979577  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:14.047872  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:14.047909  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:14.064531  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:14.064559  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:14.132145  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:14.123439    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:14.124184    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:14.125827    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:14.126337    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:14.128067    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:14.132167  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:14.132180  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:14.158143  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:14.158181  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:16.686213  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:16.696766  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:16.696836  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:16.720811  604010 cri.go:89] found id: ""
	I1213 11:56:16.720840  604010 logs.go:282] 0 containers: []
	W1213 11:56:16.720849  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:16.720856  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:16.720916  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:16.746135  604010 cri.go:89] found id: ""
	I1213 11:56:16.746162  604010 logs.go:282] 0 containers: []
	W1213 11:56:16.746170  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:16.746177  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:16.746235  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:16.772135  604010 cri.go:89] found id: ""
	I1213 11:56:16.772162  604010 logs.go:282] 0 containers: []
	W1213 11:56:16.772171  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:16.772177  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:16.772263  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:16.801712  604010 cri.go:89] found id: ""
	I1213 11:56:16.801738  604010 logs.go:282] 0 containers: []
	W1213 11:56:16.801748  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:16.801754  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:16.801813  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:16.825625  604010 cri.go:89] found id: ""
	I1213 11:56:16.825649  604010 logs.go:282] 0 containers: []
	W1213 11:56:16.825658  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:16.825664  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:16.825723  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:16.850464  604010 cri.go:89] found id: ""
	I1213 11:56:16.850490  604010 logs.go:282] 0 containers: []
	W1213 11:56:16.850498  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:16.850505  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:16.850561  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:16.882804  604010 cri.go:89] found id: ""
	I1213 11:56:16.882826  604010 logs.go:282] 0 containers: []
	W1213 11:56:16.882835  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:16.882848  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:16.882906  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:16.908046  604010 cri.go:89] found id: ""
	I1213 11:56:16.908071  604010 logs.go:282] 0 containers: []
	W1213 11:56:16.908080  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:16.908090  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:16.908104  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:17.008503  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:17.008590  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:17.024851  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:17.024884  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:17.092834  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:17.083994    9524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:17.084849    9524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:17.086559    9524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:17.087267    9524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:17.088871    9524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:17.092854  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:17.092867  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:17.118299  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:17.118334  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:19.647201  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:19.658196  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:19.658313  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:19.681845  604010 cri.go:89] found id: ""
	I1213 11:56:19.681924  604010 logs.go:282] 0 containers: []
	W1213 11:56:19.681947  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:19.681966  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:19.682053  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:19.707693  604010 cri.go:89] found id: ""
	I1213 11:56:19.707717  604010 logs.go:282] 0 containers: []
	W1213 11:56:19.707727  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:19.707733  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:19.707809  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:19.732762  604010 cri.go:89] found id: ""
	I1213 11:56:19.732788  604010 logs.go:282] 0 containers: []
	W1213 11:56:19.732797  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:19.732804  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:19.732884  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:19.757359  604010 cri.go:89] found id: ""
	I1213 11:56:19.757393  604010 logs.go:282] 0 containers: []
	W1213 11:56:19.757402  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:19.757423  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:19.757500  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:19.785446  604010 cri.go:89] found id: ""
	I1213 11:56:19.785473  604010 logs.go:282] 0 containers: []
	W1213 11:56:19.785482  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:19.785489  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:19.785610  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:19.812583  604010 cri.go:89] found id: ""
	I1213 11:56:19.812607  604010 logs.go:282] 0 containers: []
	W1213 11:56:19.812616  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:19.812623  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:19.812681  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:19.836875  604010 cri.go:89] found id: ""
	I1213 11:56:19.836901  604010 logs.go:282] 0 containers: []
	W1213 11:56:19.836910  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:19.836919  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:19.837022  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:19.861557  604010 cri.go:89] found id: ""
	I1213 11:56:19.861584  604010 logs.go:282] 0 containers: []
	W1213 11:56:19.861595  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:19.861610  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:19.861631  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:19.920472  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:19.920510  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:19.973429  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:19.973459  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:20.062908  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:20.053401    9639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:20.054064    9639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:20.055967    9639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:20.056677    9639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:20.058665    9639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:20.062932  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:20.062945  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:20.089847  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:20.089889  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:22.621952  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:22.633355  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:22.633434  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:22.661131  604010 cri.go:89] found id: ""
	I1213 11:56:22.661156  604010 logs.go:282] 0 containers: []
	W1213 11:56:22.661165  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:22.661172  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:22.661231  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:22.687274  604010 cri.go:89] found id: ""
	I1213 11:56:22.687309  604010 logs.go:282] 0 containers: []
	W1213 11:56:22.687319  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:22.687325  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:22.687385  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:22.712134  604010 cri.go:89] found id: ""
	I1213 11:56:22.712162  604010 logs.go:282] 0 containers: []
	W1213 11:56:22.712177  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:22.712184  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:22.712243  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:22.737658  604010 cri.go:89] found id: ""
	I1213 11:56:22.737684  604010 logs.go:282] 0 containers: []
	W1213 11:56:22.737693  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:22.737699  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:22.737756  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:22.762933  604010 cri.go:89] found id: ""
	I1213 11:56:22.762958  604010 logs.go:282] 0 containers: []
	W1213 11:56:22.762966  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:22.762973  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:22.763030  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:22.787428  604010 cri.go:89] found id: ""
	I1213 11:56:22.787453  604010 logs.go:282] 0 containers: []
	W1213 11:56:22.787463  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:22.787469  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:22.787531  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:22.812716  604010 cri.go:89] found id: ""
	I1213 11:56:22.812746  604010 logs.go:282] 0 containers: []
	W1213 11:56:22.812754  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:22.812761  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:22.812849  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:22.837817  604010 cri.go:89] found id: ""
	I1213 11:56:22.837844  604010 logs.go:282] 0 containers: []
	W1213 11:56:22.837853  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:22.837863  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:22.837883  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:22.893260  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:22.893294  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:22.917278  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:22.917388  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:23.026082  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:23.017267    9756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:23.017959    9756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:23.019734    9756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:23.020131    9756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:23.021757    9756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:23.026106  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:23.026120  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:23.052026  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:23.052065  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:25.580545  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:25.591333  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:25.591403  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:25.616731  604010 cri.go:89] found id: ""
	I1213 11:56:25.616754  604010 logs.go:282] 0 containers: []
	W1213 11:56:25.616764  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:25.616771  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:25.616827  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:25.646111  604010 cri.go:89] found id: ""
	I1213 11:56:25.646135  604010 logs.go:282] 0 containers: []
	W1213 11:56:25.646144  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:25.646151  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:25.646212  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:25.674261  604010 cri.go:89] found id: ""
	I1213 11:56:25.674284  604010 logs.go:282] 0 containers: []
	W1213 11:56:25.674293  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:25.674300  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:25.674358  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:25.700613  604010 cri.go:89] found id: ""
	I1213 11:56:25.700636  604010 logs.go:282] 0 containers: []
	W1213 11:56:25.700644  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:25.700650  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:25.700707  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:25.728704  604010 cri.go:89] found id: ""
	I1213 11:56:25.728789  604010 logs.go:282] 0 containers: []
	W1213 11:56:25.728805  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:25.728818  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:25.728885  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:25.761516  604010 cri.go:89] found id: ""
	I1213 11:56:25.761538  604010 logs.go:282] 0 containers: []
	W1213 11:56:25.761548  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:25.761555  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:25.761635  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:25.786867  604010 cri.go:89] found id: ""
	I1213 11:56:25.786895  604010 logs.go:282] 0 containers: []
	W1213 11:56:25.786905  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:25.786911  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:25.786970  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:25.811462  604010 cri.go:89] found id: ""
	I1213 11:56:25.811485  604010 logs.go:282] 0 containers: []
	W1213 11:56:25.811493  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:25.811503  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:25.811514  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:25.866924  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:25.866955  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:25.883500  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:25.883530  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:25.977779  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:25.966190    9862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:25.967164    9862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:25.969705    9862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:25.971514    9862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:25.972246    9862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:25.977806  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:25.977819  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:26.009949  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:26.010030  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:28.542187  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:28.552481  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:28.552607  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:28.581578  604010 cri.go:89] found id: ""
	I1213 11:56:28.581611  604010 logs.go:282] 0 containers: []
	W1213 11:56:28.581627  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:28.581634  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:28.581690  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:28.607125  604010 cri.go:89] found id: ""
	I1213 11:56:28.607149  604010 logs.go:282] 0 containers: []
	W1213 11:56:28.607157  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:28.607163  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:28.607220  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:28.632720  604010 cri.go:89] found id: ""
	I1213 11:56:28.632747  604010 logs.go:282] 0 containers: []
	W1213 11:56:28.632758  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:28.632765  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:28.632822  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:28.658222  604010 cri.go:89] found id: ""
	I1213 11:56:28.658251  604010 logs.go:282] 0 containers: []
	W1213 11:56:28.658260  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:28.658267  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:28.658325  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:28.682387  604010 cri.go:89] found id: ""
	I1213 11:56:28.682425  604010 logs.go:282] 0 containers: []
	W1213 11:56:28.682436  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:28.682443  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:28.682519  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:28.707965  604010 cri.go:89] found id: ""
	I1213 11:56:28.708001  604010 logs.go:282] 0 containers: []
	W1213 11:56:28.708011  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:28.708024  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:28.708094  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:28.737087  604010 cri.go:89] found id: ""
	I1213 11:56:28.737115  604010 logs.go:282] 0 containers: []
	W1213 11:56:28.737124  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:28.737130  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:28.737189  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:28.761982  604010 cri.go:89] found id: ""
	I1213 11:56:28.762059  604010 logs.go:282] 0 containers: []
	W1213 11:56:28.762081  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:28.762108  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:28.762148  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:28.817649  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:28.817687  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:28.833874  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:28.833904  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:28.901287  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:28.892846    9975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:28.893499    9975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:28.895107    9975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:28.895608    9975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:28.897226    9975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:28.901308  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:28.901319  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:28.943036  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:28.943114  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:31.504085  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:31.516702  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:31.516776  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:31.541829  604010 cri.go:89] found id: ""
	I1213 11:56:31.541852  604010 logs.go:282] 0 containers: []
	W1213 11:56:31.541861  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:31.541868  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:31.541927  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:31.567128  604010 cri.go:89] found id: ""
	I1213 11:56:31.567153  604010 logs.go:282] 0 containers: []
	W1213 11:56:31.567162  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:31.567169  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:31.567228  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:31.592889  604010 cri.go:89] found id: ""
	I1213 11:56:31.592914  604010 logs.go:282] 0 containers: []
	W1213 11:56:31.592924  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:31.592931  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:31.592988  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:31.620810  604010 cri.go:89] found id: ""
	I1213 11:56:31.620834  604010 logs.go:282] 0 containers: []
	W1213 11:56:31.620843  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:31.620850  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:31.620907  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:31.645931  604010 cri.go:89] found id: ""
	I1213 11:56:31.645958  604010 logs.go:282] 0 containers: []
	W1213 11:56:31.645968  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:31.645975  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:31.646034  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:31.671037  604010 cri.go:89] found id: ""
	I1213 11:56:31.671065  604010 logs.go:282] 0 containers: []
	W1213 11:56:31.671074  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:31.671116  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:31.671180  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:31.696779  604010 cri.go:89] found id: ""
	I1213 11:56:31.696805  604010 logs.go:282] 0 containers: []
	W1213 11:56:31.696814  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:31.696820  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:31.696886  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:31.721074  604010 cri.go:89] found id: ""
	I1213 11:56:31.721152  604010 logs.go:282] 0 containers: []
	W1213 11:56:31.721175  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:31.721198  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:31.721238  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:31.776685  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:31.776720  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:31.793212  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:31.793241  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:31.856954  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:31.848666   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:31.849288   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:31.850793   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:31.851220   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:31.852660   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:31.857017  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:31.857044  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:31.882038  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:31.882070  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
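	[Editor's note] The block above is one complete diagnostic pass: minikube's ssh_runner probes for a kube-apiserver process, asks crictl for each expected control-plane container, and every query returns an empty ID list. A minimal bash sketch that replays the same probe by hand (an assumption-laden illustration, not minikube code; it presumes shell access to the node, e.g. via minikube ssh, and crictl on PATH — the loop body is lifted from the Run lines above):

	    # Replay minikube's control-plane probe (illustrative sketch).
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      [ -z "$ids" ] && echo "no container matching \"$name\"" || echo "$name -> $ids"
	    done
	    # Process-level check that opens each cycle:
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "kube-apiserver process not found"

	Empty results for all eight names, re-polled every ~3 seconds, are what produce the repeating W-level lines throughout this section.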
	I1213 11:56:34.425618  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:34.436018  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:34.436163  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:34.460322  604010 cri.go:89] found id: ""
	I1213 11:56:34.460347  604010 logs.go:282] 0 containers: []
	W1213 11:56:34.460356  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:34.460362  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:34.460442  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:34.484514  604010 cri.go:89] found id: ""
	I1213 11:56:34.484582  604010 logs.go:282] 0 containers: []
	W1213 11:56:34.484607  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:34.484622  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:34.484695  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:34.513969  604010 cri.go:89] found id: ""
	I1213 11:56:34.514006  604010 logs.go:282] 0 containers: []
	W1213 11:56:34.514016  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:34.514023  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:34.514089  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:34.541219  604010 cri.go:89] found id: ""
	I1213 11:56:34.541245  604010 logs.go:282] 0 containers: []
	W1213 11:56:34.541254  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:34.541260  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:34.541323  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:34.570631  604010 cri.go:89] found id: ""
	I1213 11:56:34.570653  604010 logs.go:282] 0 containers: []
	W1213 11:56:34.570662  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:34.570668  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:34.570749  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:34.594597  604010 cri.go:89] found id: ""
	I1213 11:56:34.594636  604010 logs.go:282] 0 containers: []
	W1213 11:56:34.594645  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:34.594651  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:34.594741  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:34.618131  604010 cri.go:89] found id: ""
	I1213 11:56:34.618159  604010 logs.go:282] 0 containers: []
	W1213 11:56:34.618168  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:34.618174  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:34.618230  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:34.645177  604010 cri.go:89] found id: ""
	I1213 11:56:34.645204  604010 logs.go:282] 0 containers: []
	W1213 11:56:34.645213  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:34.645223  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:34.645235  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:34.674203  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:34.674235  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:34.731298  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:34.731332  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:34.747591  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:34.747623  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:34.811066  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:34.802515   10213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:34.803209   10213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:34.804716   10213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:34.805051   10213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:34.806504   10213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:34.811137  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:34.811171  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:37.342058  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:37.352580  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:37.352649  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:37.376663  604010 cri.go:89] found id: ""
	I1213 11:56:37.376689  604010 logs.go:282] 0 containers: []
	W1213 11:56:37.376698  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:37.376704  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:37.376763  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:37.400694  604010 cri.go:89] found id: ""
	I1213 11:56:37.400720  604010 logs.go:282] 0 containers: []
	W1213 11:56:37.400728  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:37.400735  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:37.400796  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:37.425687  604010 cri.go:89] found id: ""
	I1213 11:56:37.425715  604010 logs.go:282] 0 containers: []
	W1213 11:56:37.425724  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:37.425730  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:37.425787  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:37.450160  604010 cri.go:89] found id: ""
	I1213 11:56:37.450189  604010 logs.go:282] 0 containers: []
	W1213 11:56:37.450198  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:37.450205  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:37.450266  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:37.475110  604010 cri.go:89] found id: ""
	I1213 11:56:37.475133  604010 logs.go:282] 0 containers: []
	W1213 11:56:37.475142  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:37.475149  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:37.475207  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:37.499102  604010 cri.go:89] found id: ""
	I1213 11:56:37.499171  604010 logs.go:282] 0 containers: []
	W1213 11:56:37.499196  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:37.499207  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:37.499282  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:37.528584  604010 cri.go:89] found id: ""
	I1213 11:56:37.528609  604010 logs.go:282] 0 containers: []
	W1213 11:56:37.528618  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:37.528624  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:37.528708  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:37.554175  604010 cri.go:89] found id: ""
	I1213 11:56:37.554259  604010 logs.go:282] 0 containers: []
	W1213 11:56:37.554283  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:37.554304  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:37.554347  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:37.612670  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:37.612706  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:37.629187  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:37.629218  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:37.694612  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:37.685617   10314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:37.686619   10314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:37.688268   10314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:37.688681   10314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:37.690407   10314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:37.694640  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:37.694653  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:37.719952  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:37.719988  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
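	[Editor's note] With no containers to inspect, each cycle falls back to host-level log gathering; the exact commands are visible in the Run lines. For manual triage the same set can be collected in one pass — a sketch of the commands exactly as logged, with only the comments added:

	    # Same diagnostics minikube gathers on every empty cycle.
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u containerd -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig    # fails while :8443 is down
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a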
	I1213 11:56:40.252201  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:40.265281  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:40.265368  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:40.289761  604010 cri.go:89] found id: ""
	I1213 11:56:40.289841  604010 logs.go:282] 0 containers: []
	W1213 11:56:40.289865  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:40.289885  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:40.289969  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:40.314886  604010 cri.go:89] found id: ""
	I1213 11:56:40.314911  604010 logs.go:282] 0 containers: []
	W1213 11:56:40.314920  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:40.314928  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:40.314988  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:40.340433  604010 cri.go:89] found id: ""
	I1213 11:56:40.340460  604010 logs.go:282] 0 containers: []
	W1213 11:56:40.340469  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:40.340475  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:40.340535  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:40.369630  604010 cri.go:89] found id: ""
	I1213 11:56:40.369657  604010 logs.go:282] 0 containers: []
	W1213 11:56:40.369666  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:40.369672  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:40.369730  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:40.396456  604010 cri.go:89] found id: ""
	I1213 11:56:40.396480  604010 logs.go:282] 0 containers: []
	W1213 11:56:40.396489  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:40.396495  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:40.396550  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:40.420915  604010 cri.go:89] found id: ""
	I1213 11:56:40.420982  604010 logs.go:282] 0 containers: []
	W1213 11:56:40.420996  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:40.421004  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:40.421067  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:40.445305  604010 cri.go:89] found id: ""
	I1213 11:56:40.445339  604010 logs.go:282] 0 containers: []
	W1213 11:56:40.445349  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:40.445355  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:40.445423  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:40.470359  604010 cri.go:89] found id: ""
	I1213 11:56:40.470396  604010 logs.go:282] 0 containers: []
	W1213 11:56:40.470406  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:40.470415  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:40.470428  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:40.529991  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:40.530029  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:40.545704  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:40.545785  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:40.614385  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:40.605002   10428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:40.605654   10428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:40.608020   10428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:40.608670   10428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:40.609867   10428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:40.614411  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:40.614423  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:40.640189  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:40.640226  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:43.171206  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:43.187532  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:43.187604  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:43.255773  604010 cri.go:89] found id: ""
	I1213 11:56:43.255816  604010 logs.go:282] 0 containers: []
	W1213 11:56:43.255826  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:43.255833  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:43.255893  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:43.282066  604010 cri.go:89] found id: ""
	I1213 11:56:43.282095  604010 logs.go:282] 0 containers: []
	W1213 11:56:43.282104  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:43.282110  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:43.282169  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:43.307994  604010 cri.go:89] found id: ""
	I1213 11:56:43.308022  604010 logs.go:282] 0 containers: []
	W1213 11:56:43.308031  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:43.308037  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:43.308094  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:43.333649  604010 cri.go:89] found id: ""
	I1213 11:56:43.333682  604010 logs.go:282] 0 containers: []
	W1213 11:56:43.333692  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:43.333699  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:43.333761  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:43.364007  604010 cri.go:89] found id: ""
	I1213 11:56:43.364037  604010 logs.go:282] 0 containers: []
	W1213 11:56:43.364045  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:43.364052  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:43.364110  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:43.389343  604010 cri.go:89] found id: ""
	I1213 11:56:43.389381  604010 logs.go:282] 0 containers: []
	W1213 11:56:43.389389  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:43.389396  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:43.389466  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:43.414572  604010 cri.go:89] found id: ""
	I1213 11:56:43.414608  604010 logs.go:282] 0 containers: []
	W1213 11:56:43.414618  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:43.414624  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:43.414711  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:43.439971  604010 cri.go:89] found id: ""
	I1213 11:56:43.439999  604010 logs.go:282] 0 containers: []
	W1213 11:56:43.440008  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:43.440018  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:43.440034  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:43.455350  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:43.455380  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:43.518971  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:43.510133   10540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:43.510875   10540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:43.512575   10540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:43.513204   10540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:43.514989   10540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:43.519004  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:43.519017  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:43.543826  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:43.543863  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:43.571534  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:43.571561  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:46.127908  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:46.138548  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:46.138627  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:46.177176  604010 cri.go:89] found id: ""
	I1213 11:56:46.177205  604010 logs.go:282] 0 containers: []
	W1213 11:56:46.177214  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:46.177220  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:46.177280  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:46.250872  604010 cri.go:89] found id: ""
	I1213 11:56:46.250897  604010 logs.go:282] 0 containers: []
	W1213 11:56:46.250906  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:46.250913  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:46.250972  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:46.276982  604010 cri.go:89] found id: ""
	I1213 11:56:46.277008  604010 logs.go:282] 0 containers: []
	W1213 11:56:46.277020  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:46.277026  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:46.277086  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:46.308722  604010 cri.go:89] found id: ""
	I1213 11:56:46.308745  604010 logs.go:282] 0 containers: []
	W1213 11:56:46.308754  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:46.308760  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:46.308819  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:46.333457  604010 cri.go:89] found id: ""
	I1213 11:56:46.333479  604010 logs.go:282] 0 containers: []
	W1213 11:56:46.333488  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:46.333495  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:46.333551  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:46.361010  604010 cri.go:89] found id: ""
	I1213 11:56:46.361034  604010 logs.go:282] 0 containers: []
	W1213 11:56:46.361042  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:46.361049  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:46.361107  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:46.385580  604010 cri.go:89] found id: ""
	I1213 11:56:46.385608  604010 logs.go:282] 0 containers: []
	W1213 11:56:46.385625  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:46.385631  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:46.385689  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:46.410013  604010 cri.go:89] found id: ""
	I1213 11:56:46.410041  604010 logs.go:282] 0 containers: []
	W1213 11:56:46.410050  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:46.410059  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:46.410071  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:46.474489  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:46.465232   10652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:46.465851   10652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:46.467612   10652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:46.468248   10652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:46.469990   10652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:46.474512  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:46.474525  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:46.499926  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:46.499961  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:46.529519  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:46.529543  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:46.585780  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:46.585816  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:49.102338  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:49.113041  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:49.113164  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:49.137484  604010 cri.go:89] found id: ""
	I1213 11:56:49.137527  604010 logs.go:282] 0 containers: []
	W1213 11:56:49.137536  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:49.137543  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:49.137633  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:49.176305  604010 cri.go:89] found id: ""
	I1213 11:56:49.176345  604010 logs.go:282] 0 containers: []
	W1213 11:56:49.176354  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:49.176360  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:49.176445  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:49.216965  604010 cri.go:89] found id: ""
	I1213 11:56:49.216992  604010 logs.go:282] 0 containers: []
	W1213 11:56:49.217001  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:49.217007  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:49.217076  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:49.262147  604010 cri.go:89] found id: ""
	I1213 11:56:49.262226  604010 logs.go:282] 0 containers: []
	W1213 11:56:49.262256  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:49.262277  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:49.262367  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:49.292097  604010 cri.go:89] found id: ""
	I1213 11:56:49.292124  604010 logs.go:282] 0 containers: []
	W1213 11:56:49.292133  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:49.292140  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:49.292195  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:49.316193  604010 cri.go:89] found id: ""
	I1213 11:56:49.316219  604010 logs.go:282] 0 containers: []
	W1213 11:56:49.316228  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:49.316235  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:49.316293  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:49.341385  604010 cri.go:89] found id: ""
	I1213 11:56:49.341411  604010 logs.go:282] 0 containers: []
	W1213 11:56:49.341421  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:49.341434  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:49.341503  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:49.365851  604010 cri.go:89] found id: ""
	I1213 11:56:49.365874  604010 logs.go:282] 0 containers: []
	W1213 11:56:49.365883  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:49.365892  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:49.365903  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:49.381508  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:49.381537  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:49.444383  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:49.436163   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:49.436758   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:49.438415   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:49.438958   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:49.440549   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:49.444406  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:49.444419  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:49.469593  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:49.469636  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:49.497881  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:49.497912  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:52.053968  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:52.065301  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:52.065418  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:52.096894  604010 cri.go:89] found id: ""
	I1213 11:56:52.096966  604010 logs.go:282] 0 containers: []
	W1213 11:56:52.096988  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:52.097007  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:52.097097  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:52.124148  604010 cri.go:89] found id: ""
	I1213 11:56:52.124173  604010 logs.go:282] 0 containers: []
	W1213 11:56:52.124186  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:52.124193  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:52.124306  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:52.160416  604010 cri.go:89] found id: ""
	I1213 11:56:52.160439  604010 logs.go:282] 0 containers: []
	W1213 11:56:52.160448  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:52.160455  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:52.160513  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:52.200069  604010 cri.go:89] found id: ""
	I1213 11:56:52.200095  604010 logs.go:282] 0 containers: []
	W1213 11:56:52.200104  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:52.200111  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:52.200174  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:52.263224  604010 cri.go:89] found id: ""
	I1213 11:56:52.263295  604010 logs.go:282] 0 containers: []
	W1213 11:56:52.263310  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:52.263318  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:52.263375  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:52.288649  604010 cri.go:89] found id: ""
	I1213 11:56:52.288675  604010 logs.go:282] 0 containers: []
	W1213 11:56:52.288684  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:52.288691  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:52.288754  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:52.316561  604010 cri.go:89] found id: ""
	I1213 11:56:52.316588  604010 logs.go:282] 0 containers: []
	W1213 11:56:52.316596  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:52.316603  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:52.316660  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:52.341885  604010 cri.go:89] found id: ""
	I1213 11:56:52.341909  604010 logs.go:282] 0 containers: []
	W1213 11:56:52.341918  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:52.341927  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:52.341938  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:52.397001  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:52.397038  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:52.415607  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:52.415635  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:52.493248  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:52.484194   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:52.484676   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:52.486433   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:52.486904   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:52.488650   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:52.493274  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:52.493288  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:52.518551  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:52.518588  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
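	[Editor's note] Every describe-nodes attempt in this section fails identically: client-go's cached discovery (memcache.go) retries five times against https://localhost:8443 and each dial is refused, meaning nothing is listening on the apiserver port at all. Before suspecting kubectl or the kubeconfig, a listener check settles it — a hedged sketch, since neither ss nor nc appears in the original log and their availability varies by node image (ss ships with iproute2):

	    # Is anything bound to the apiserver port?
	    sudo ss -ltnp | grep -w 8443 || echo "nothing listening on 8443"
	    # Same answer from the client side:
	    nc -z -w 2 localhost 8443 && echo "8443 reachable" || echo "connection refused, as logged"

	A refused dial here, combined with the empty crictl listings above, points at the apiserver container never starting rather than at a networking or auth problem.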
	I1213 11:56:55.047907  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:55.059302  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:55.059421  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:55.085237  604010 cri.go:89] found id: ""
	I1213 11:56:55.085271  604010 logs.go:282] 0 containers: []
	W1213 11:56:55.085281  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:55.085288  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:55.085362  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:55.112434  604010 cri.go:89] found id: ""
	I1213 11:56:55.112462  604010 logs.go:282] 0 containers: []
	W1213 11:56:55.112475  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:55.112482  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:55.112544  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:55.138067  604010 cri.go:89] found id: ""
	I1213 11:56:55.138101  604010 logs.go:282] 0 containers: []
	W1213 11:56:55.138110  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:55.138117  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:55.138184  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:55.179401  604010 cri.go:89] found id: ""
	I1213 11:56:55.179522  604010 logs.go:282] 0 containers: []
	W1213 11:56:55.179548  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:55.179588  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:55.179766  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:55.234369  604010 cri.go:89] found id: ""
	I1213 11:56:55.234462  604010 logs.go:282] 0 containers: []
	W1213 11:56:55.234499  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:55.234544  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:55.234676  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:55.277189  604010 cri.go:89] found id: ""
	I1213 11:56:55.277271  604010 logs.go:282] 0 containers: []
	W1213 11:56:55.277294  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:55.277314  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:55.277416  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:55.310856  604010 cri.go:89] found id: ""
	I1213 11:56:55.310933  604010 logs.go:282] 0 containers: []
	W1213 11:56:55.310949  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:55.310958  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:55.311020  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:55.337357  604010 cri.go:89] found id: ""
	I1213 11:56:55.337453  604010 logs.go:282] 0 containers: []
	W1213 11:56:55.337468  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:55.337478  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:55.337490  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:55.392569  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:55.392607  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:55.408576  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:55.408608  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:55.471726  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	** stderr ** 
	E1213 11:56:55.463854   10996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:55.464422   10996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:55.465928   10996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:55.466440   10996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:55.467966   10996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:56:55.471749  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:55.471762  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:55.497230  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:55.497266  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
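Every describe-nodes failure in this section is the same symptom: kubectl, pointed at /var/lib/minikube/kubeconfig, dials https://localhost:8443 and gets connection refused, which is consistent with crictl finding no kube-apiserver container at all. A quick way to confirm this from inside the node (assuming the ss utility is available in the node image):

    # Nothing should be listening where the kubeconfig points (port 8443
    # taken from the errors above):
    sudo ss -ltn | grep 8443 || echo "nothing listening on 8443"
    # The client binary itself works; only the server side is down:
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl version --client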
	I1213 11:56:58.026521  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:58.040495  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:58.040579  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:58.067542  604010 cri.go:89] found id: ""
	I1213 11:56:58.067567  604010 logs.go:282] 0 containers: []
	W1213 11:56:58.067576  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:58.067583  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:58.067649  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:58.092616  604010 cri.go:89] found id: ""
	I1213 11:56:58.092642  604010 logs.go:282] 0 containers: []
	W1213 11:56:58.092651  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:58.092657  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:58.092714  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:58.117533  604010 cri.go:89] found id: ""
	I1213 11:56:58.117561  604010 logs.go:282] 0 containers: []
	W1213 11:56:58.117572  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:58.117578  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:58.117669  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:58.143441  604010 cri.go:89] found id: ""
	I1213 11:56:58.143465  604010 logs.go:282] 0 containers: []
	W1213 11:56:58.143474  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:58.143481  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:58.143540  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:58.191063  604010 cri.go:89] found id: ""
	I1213 11:56:58.191086  604010 logs.go:282] 0 containers: []
	W1213 11:56:58.191096  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:58.191102  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:58.191175  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:58.233666  604010 cri.go:89] found id: ""
	I1213 11:56:58.233709  604010 logs.go:282] 0 containers: []
	W1213 11:56:58.233727  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:58.233734  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:58.233805  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:58.285997  604010 cri.go:89] found id: ""
	I1213 11:56:58.286020  604010 logs.go:282] 0 containers: []
	W1213 11:56:58.286029  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:58.286035  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:58.286099  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:58.313519  604010 cri.go:89] found id: ""
	I1213 11:56:58.313544  604010 logs.go:282] 0 containers: []
	W1213 11:56:58.313553  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:58.313570  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:58.313581  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:58.372174  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:58.372208  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:58.387775  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:58.387803  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:58.457676  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	** stderr ** 
	E1213 11:56:58.448571   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:58.449279   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:58.451118   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:58.451644   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:58.453241   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:56:58.457698  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:58.457711  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:58.482922  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:58.482956  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:01.016291  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:01.027467  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:01.027540  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:01.061002  604010 cri.go:89] found id: ""
	I1213 11:57:01.061026  604010 logs.go:282] 0 containers: []
	W1213 11:57:01.061035  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:01.061041  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:01.061099  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:01.090375  604010 cri.go:89] found id: ""
	I1213 11:57:01.090403  604010 logs.go:282] 0 containers: []
	W1213 11:57:01.090412  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:01.090418  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:01.090476  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:01.118417  604010 cri.go:89] found id: ""
	I1213 11:57:01.118441  604010 logs.go:282] 0 containers: []
	W1213 11:57:01.118450  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:01.118456  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:01.118521  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:01.147901  604010 cri.go:89] found id: ""
	I1213 11:57:01.147929  604010 logs.go:282] 0 containers: []
	W1213 11:57:01.147938  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:01.147946  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:01.148009  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:01.207604  604010 cri.go:89] found id: ""
	I1213 11:57:01.207681  604010 logs.go:282] 0 containers: []
	W1213 11:57:01.207708  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:01.207727  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:01.207818  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:01.263340  604010 cri.go:89] found id: ""
	I1213 11:57:01.263407  604010 logs.go:282] 0 containers: []
	W1213 11:57:01.263428  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:01.263446  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:01.263531  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:01.296139  604010 cri.go:89] found id: ""
	I1213 11:57:01.296213  604010 logs.go:282] 0 containers: []
	W1213 11:57:01.296231  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:01.296242  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:01.296313  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:01.323150  604010 cri.go:89] found id: ""
	I1213 11:57:01.323175  604010 logs.go:282] 0 containers: []
	W1213 11:57:01.323185  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:01.323194  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:01.323206  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:01.351631  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:01.351659  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:01.410361  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:01.410398  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:01.426884  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:01.426921  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:01.495923  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	** stderr ** 
	E1213 11:57:01.487940   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:01.488738   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:01.490397   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:01.490777   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:01.492041   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:01.495947  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:01.495960  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
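The dmesg invocation repeated in each cycle filters the kernel ring buffer down to warnings and worse. The same call spelled out with util-linux long options, for readability:

    # -P/--nopager, -H/--human, -L/--color; keep only warn and more severe:
    sudo dmesg --nopager --human --color=never \
         --level warn,err,crit,alert,emerg | tail -n 400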
	I1213 11:57:04.023306  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:04.034376  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:04.034451  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:04.058883  604010 cri.go:89] found id: ""
	I1213 11:57:04.058911  604010 logs.go:282] 0 containers: []
	W1213 11:57:04.058921  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:04.058929  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:04.058990  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:04.084571  604010 cri.go:89] found id: ""
	I1213 11:57:04.084598  604010 logs.go:282] 0 containers: []
	W1213 11:57:04.084607  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:04.084615  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:04.084698  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:04.111492  604010 cri.go:89] found id: ""
	I1213 11:57:04.111518  604010 logs.go:282] 0 containers: []
	W1213 11:57:04.111527  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:04.111534  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:04.111594  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:04.140605  604010 cri.go:89] found id: ""
	I1213 11:57:04.140632  604010 logs.go:282] 0 containers: []
	W1213 11:57:04.140641  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:04.140648  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:04.140709  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:04.170556  604010 cri.go:89] found id: ""
	I1213 11:57:04.170583  604010 logs.go:282] 0 containers: []
	W1213 11:57:04.170592  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:04.170598  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:04.170654  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:04.221024  604010 cri.go:89] found id: ""
	I1213 11:57:04.221047  604010 logs.go:282] 0 containers: []
	W1213 11:57:04.221056  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:04.221062  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:04.221120  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:04.258557  604010 cri.go:89] found id: ""
	I1213 11:57:04.258583  604010 logs.go:282] 0 containers: []
	W1213 11:57:04.258601  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:04.258608  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:04.258667  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:04.286096  604010 cri.go:89] found id: ""
	I1213 11:57:04.286121  604010 logs.go:282] 0 containers: []
	W1213 11:57:04.286130  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:04.286140  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:04.286154  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:04.342856  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:04.342892  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:04.359212  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:04.359247  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:04.426841  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	** stderr ** 
	E1213 11:57:04.417916   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:04.418505   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:04.420627   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:04.421110   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:04.422742   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:04.426863  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:04.426876  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:04.452958  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:04.452999  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:06.985291  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:06.996435  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:06.996506  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:07.027757  604010 cri.go:89] found id: ""
	I1213 11:57:07.027792  604010 logs.go:282] 0 containers: []
	W1213 11:57:07.027802  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:07.027808  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:07.027875  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:07.053033  604010 cri.go:89] found id: ""
	I1213 11:57:07.053059  604010 logs.go:282] 0 containers: []
	W1213 11:57:07.053068  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:07.053075  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:07.053135  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:07.077293  604010 cri.go:89] found id: ""
	I1213 11:57:07.077320  604010 logs.go:282] 0 containers: []
	W1213 11:57:07.077330  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:07.077336  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:07.077400  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:07.101590  604010 cri.go:89] found id: ""
	I1213 11:57:07.101615  604010 logs.go:282] 0 containers: []
	W1213 11:57:07.101630  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:07.101636  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:07.101693  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:07.129837  604010 cri.go:89] found id: ""
	I1213 11:57:07.129867  604010 logs.go:282] 0 containers: []
	W1213 11:57:07.129877  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:07.129883  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:07.129943  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:07.155693  604010 cri.go:89] found id: ""
	I1213 11:57:07.155719  604010 logs.go:282] 0 containers: []
	W1213 11:57:07.155729  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:07.155735  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:07.155799  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:07.208290  604010 cri.go:89] found id: ""
	I1213 11:57:07.208318  604010 logs.go:282] 0 containers: []
	W1213 11:57:07.208327  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:07.208334  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:07.208398  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:07.260450  604010 cri.go:89] found id: ""
	I1213 11:57:07.260475  604010 logs.go:282] 0 containers: []
	W1213 11:57:07.260485  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:07.260494  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:07.260505  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:07.317882  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:07.317918  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:07.334495  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:07.334524  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:07.403490  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	** stderr ** 
	E1213 11:57:07.393965   11443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:07.394975   11443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:07.396603   11443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:07.397190   11443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:07.398983   11443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:07.403516  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:07.403531  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:07.428864  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:07.428901  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:09.962852  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:09.973890  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:09.973963  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:10.008764  604010 cri.go:89] found id: ""
	I1213 11:57:10.008791  604010 logs.go:282] 0 containers: []
	W1213 11:57:10.008801  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:10.008808  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:10.008881  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:10.042627  604010 cri.go:89] found id: ""
	I1213 11:57:10.042655  604010 logs.go:282] 0 containers: []
	W1213 11:57:10.042667  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:10.042674  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:10.042762  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:10.070196  604010 cri.go:89] found id: ""
	I1213 11:57:10.070222  604010 logs.go:282] 0 containers: []
	W1213 11:57:10.070231  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:10.070238  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:10.070304  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:10.097458  604010 cri.go:89] found id: ""
	I1213 11:57:10.097484  604010 logs.go:282] 0 containers: []
	W1213 11:57:10.097493  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:10.097500  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:10.097559  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:10.124061  604010 cri.go:89] found id: ""
	I1213 11:57:10.124087  604010 logs.go:282] 0 containers: []
	W1213 11:57:10.124095  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:10.124101  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:10.124158  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:10.153659  604010 cri.go:89] found id: ""
	I1213 11:57:10.153696  604010 logs.go:282] 0 containers: []
	W1213 11:57:10.153705  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:10.153713  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:10.153792  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:10.226910  604010 cri.go:89] found id: ""
	I1213 11:57:10.226938  604010 logs.go:282] 0 containers: []
	W1213 11:57:10.226947  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:10.226953  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:10.227010  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:10.265652  604010 cri.go:89] found id: ""
	I1213 11:57:10.265676  604010 logs.go:282] 0 containers: []
	W1213 11:57:10.265685  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:10.265695  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:10.265707  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:10.332797  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	** stderr ** 
	E1213 11:57:10.323569   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:10.325115   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:10.325998   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:10.326908   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:10.328530   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:10.332820  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:10.332832  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:10.357553  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:10.357592  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:10.391809  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:10.391838  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:10.447255  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:10.447293  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
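The cri.go lines show containers being listed under the containerd runc root /run/containerd/runc/k8s.io. crictl itself reaches containerd over the CRI socket; the endpoint below is containerd's standard default, an assumption here since the socket path is not printed in this excerpt:

    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
         ps -a --quiet --name=kube-apiserver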
	I1213 11:57:12.963670  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:12.974670  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:12.974767  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:13.006230  604010 cri.go:89] found id: ""
	I1213 11:57:13.006259  604010 logs.go:282] 0 containers: []
	W1213 11:57:13.006268  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:13.006275  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:13.006340  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:13.031301  604010 cri.go:89] found id: ""
	I1213 11:57:13.031325  604010 logs.go:282] 0 containers: []
	W1213 11:57:13.031334  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:13.031340  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:13.031396  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:13.055897  604010 cri.go:89] found id: ""
	I1213 11:57:13.055927  604010 logs.go:282] 0 containers: []
	W1213 11:57:13.055936  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:13.055942  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:13.056003  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:13.081708  604010 cri.go:89] found id: ""
	I1213 11:57:13.081733  604010 logs.go:282] 0 containers: []
	W1213 11:57:13.081748  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:13.081755  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:13.081812  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:13.111812  604010 cri.go:89] found id: ""
	I1213 11:57:13.111885  604010 logs.go:282] 0 containers: []
	W1213 11:57:13.111900  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:13.111909  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:13.111971  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:13.136957  604010 cri.go:89] found id: ""
	I1213 11:57:13.136992  604010 logs.go:282] 0 containers: []
	W1213 11:57:13.137001  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:13.137025  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:13.137099  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:13.180320  604010 cri.go:89] found id: ""
	I1213 11:57:13.180354  604010 logs.go:282] 0 containers: []
	W1213 11:57:13.180363  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:13.180370  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:13.180438  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:13.232992  604010 cri.go:89] found id: ""
	I1213 11:57:13.233027  604010 logs.go:282] 0 containers: []
	W1213 11:57:13.233037  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:13.233047  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:13.233060  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:13.306234  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	** stderr ** 
	E1213 11:57:13.297958   11664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:13.298476   11664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:13.299586   11664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:13.299955   11664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:13.301394   11664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:13.306257  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:13.306272  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:13.331798  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:13.331837  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:13.364219  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:13.364248  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:13.419158  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:13.419191  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:15.935716  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:15.946701  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:15.946796  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:15.972298  604010 cri.go:89] found id: ""
	I1213 11:57:15.972375  604010 logs.go:282] 0 containers: []
	W1213 11:57:15.972392  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:15.972399  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:15.972468  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:15.997435  604010 cri.go:89] found id: ""
	I1213 11:57:15.997458  604010 logs.go:282] 0 containers: []
	W1213 11:57:15.997467  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:15.997474  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:15.997540  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:16.026069  604010 cri.go:89] found id: ""
	I1213 11:57:16.026107  604010 logs.go:282] 0 containers: []
	W1213 11:57:16.026116  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:16.026123  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:16.026190  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:16.051047  604010 cri.go:89] found id: ""
	I1213 11:57:16.051125  604010 logs.go:282] 0 containers: []
	W1213 11:57:16.051141  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:16.051149  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:16.051209  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:16.076992  604010 cri.go:89] found id: ""
	I1213 11:57:16.077060  604010 logs.go:282] 0 containers: []
	W1213 11:57:16.077086  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:16.077104  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:16.077190  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:16.104719  604010 cri.go:89] found id: ""
	I1213 11:57:16.104788  604010 logs.go:282] 0 containers: []
	W1213 11:57:16.104811  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:16.104830  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:16.104918  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:16.136668  604010 cri.go:89] found id: ""
	I1213 11:57:16.136696  604010 logs.go:282] 0 containers: []
	W1213 11:57:16.136705  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:16.136712  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:16.136772  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:16.184065  604010 cri.go:89] found id: ""
	I1213 11:57:16.184100  604010 logs.go:282] 0 containers: []
	W1213 11:57:16.184111  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:16.184120  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:16.184153  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:16.270928  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:16.270968  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:16.287140  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:16.287175  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:16.357398  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	** stderr ** 
	E1213 11:57:16.349038   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:16.349516   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:16.351357   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:16.351864   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:16.353557   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:16.357423  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:16.357435  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:16.381740  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:16.381774  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
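	[note] The cycle above (and the near-identical cycles that follow) is minikube's apiserver health-wait: pgrep for a kube-apiserver process, `crictl ps -a --quiet --name=<component>` for each control-plane container, and, when everything comes back empty, a diagnostic sweep over kubelet, dmesg, "describe nodes", containerd, and container status. The Go sketch below is a minimal, hypothetical rendering of that pattern, not minikube's implementation: runSSH and containerIDs are invented stand-ins, and commands run locally instead of over SSH, but the command strings are the ones in the log.

	// Hypothetical sketch of the poll-and-diagnose loop recorded above.
	// NOT minikube's code; helper names are invented, command strings
	// are copied from the log lines.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// runSSH stands in for minikube's ssh_runner; here it just runs the
	// command locally so the sketch is self-contained.
	func runSSH(cmd string) (string, error) {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		return string(out), err
	}

	// containerIDs mirrors the `sudo crictl ps -a --quiet --name=<c>`
	// calls: one container ID per output line, empty when none match.
	func containerIDs(name string) []string {
		out, _ := runSSH("sudo crictl ps -a --quiet --name=" + name)
		return strings.Fields(out)
	}

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
			"kubernetes-dashboard",
		}
		for tick := 0; tick < 3; tick++ { // the real loop polls until a deadline
			// Equivalent of: sudo pgrep -xnf kube-apiserver.*minikube.*
			// pgrep exits nonzero when nothing matches, so err != nil
			// means no apiserver process yet.
			if _, err := runSSH("sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
				fmt.Println("apiserver process found")
				return
			}
			for _, c := range components {
				if len(containerIDs(c)) == 0 {
					fmt.Printf("no container found matching %q\n", c)
				}
			}
			// On failure, gather diagnostics the same way the log does.
			runSSH("sudo journalctl -u kubelet -n 400")
			runSSH("sudo journalctl -u containerd -n 400")
			time.Sleep(3 * time.Second) // the log shows ~3s between cycles
		}
	}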
	I1213 11:57:18.910619  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:18.921087  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:18.921166  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:18.946478  604010 cri.go:89] found id: ""
	I1213 11:57:18.946503  604010 logs.go:282] 0 containers: []
	W1213 11:57:18.946512  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:18.946519  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:18.946578  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:18.971279  604010 cri.go:89] found id: ""
	I1213 11:57:18.971304  604010 logs.go:282] 0 containers: []
	W1213 11:57:18.971313  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:18.971320  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:18.971378  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:18.996033  604010 cri.go:89] found id: ""
	I1213 11:57:18.996059  604010 logs.go:282] 0 containers: []
	W1213 11:57:18.996068  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:18.996074  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:18.996158  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:19.021977  604010 cri.go:89] found id: ""
	I1213 11:57:19.022006  604010 logs.go:282] 0 containers: []
	W1213 11:57:19.022015  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:19.022024  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:19.022086  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:19.046193  604010 cri.go:89] found id: ""
	I1213 11:57:19.046221  604010 logs.go:282] 0 containers: []
	W1213 11:57:19.046230  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:19.046236  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:19.046297  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:19.070868  604010 cri.go:89] found id: ""
	I1213 11:57:19.070895  604010 logs.go:282] 0 containers: []
	W1213 11:57:19.070904  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:19.070911  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:19.071001  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:19.096253  604010 cri.go:89] found id: ""
	I1213 11:57:19.096276  604010 logs.go:282] 0 containers: []
	W1213 11:57:19.096285  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:19.096292  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:19.096373  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:19.121131  604010 cri.go:89] found id: ""
	I1213 11:57:19.121167  604010 logs.go:282] 0 containers: []
	W1213 11:57:19.121177  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:19.121186  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:19.121216  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:19.208507  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:19.190547   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:19.191444   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:19.193889   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:19.194572   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:19.199234   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:19.190547   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:19.191444   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:19.193889   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:19.194572   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:19.199234   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:19.208539  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:19.208553  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:19.237572  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:19.237656  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:19.276423  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:19.276448  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:19.334610  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:19.334648  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
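	[note] Every "describe nodes" failure in this section is the same symptom: kubectl resolves the server to localhost:8443, but nothing is listening there, so each request dies with "dial tcp [::1]:8443: connect: connection refused". The hypothetical probe below reproduces just that check; the address is taken from the log, and the program is illustrative, not part of the test harness.

	// Minimal, hypothetical probe for the failure mode above: is anything
	// accepting TCP connections on the apiserver port? Not harness code.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// kubectl resolved the server to localhost:8443 (see the log).
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err) // e.g. connection refused
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}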
	I1213 11:57:21.851744  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:21.861936  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:21.861999  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:21.885880  604010 cri.go:89] found id: ""
	I1213 11:57:21.885901  604010 logs.go:282] 0 containers: []
	W1213 11:57:21.885909  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:21.885916  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:21.885971  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:21.909866  604010 cri.go:89] found id: ""
	I1213 11:57:21.909889  604010 logs.go:282] 0 containers: []
	W1213 11:57:21.909898  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:21.909904  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:21.909961  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:21.934547  604010 cri.go:89] found id: ""
	I1213 11:57:21.934576  604010 logs.go:282] 0 containers: []
	W1213 11:57:21.934585  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:21.934591  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:21.934651  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:21.959889  604010 cri.go:89] found id: ""
	I1213 11:57:21.959915  604010 logs.go:282] 0 containers: []
	W1213 11:57:21.959925  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:21.959932  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:21.959988  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:21.989023  604010 cri.go:89] found id: ""
	I1213 11:57:21.989099  604010 logs.go:282] 0 containers: []
	W1213 11:57:21.989134  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:21.989159  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:21.989243  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:22.019806  604010 cri.go:89] found id: ""
	I1213 11:57:22.019848  604010 logs.go:282] 0 containers: []
	W1213 11:57:22.019861  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:22.019868  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:22.019934  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:22.044814  604010 cri.go:89] found id: ""
	I1213 11:57:22.044841  604010 logs.go:282] 0 containers: []
	W1213 11:57:22.044852  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:22.044858  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:22.044923  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:22.074682  604010 cri.go:89] found id: ""
	I1213 11:57:22.074726  604010 logs.go:282] 0 containers: []
	W1213 11:57:22.074735  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:22.074745  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:22.074757  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:22.150025  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:22.141291   11998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:22.141746   11998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:22.143484   11998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:22.144157   11998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:22.146009   11998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:22.141291   11998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:22.141746   11998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:22.143484   11998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:22.144157   11998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:22.146009   11998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:22.150049  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:22.150062  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:22.178881  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:22.178917  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:22.216709  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:22.216740  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:22.281457  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:22.281489  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
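	[note] The "container status" gatherer uses a shell fallback: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, i.e. prefer crictl, and if it is absent or exits nonzero, fall back to `docker ps -a`. A hypothetical Go equivalent of that preference order, for illustration only:

	// Illustrative version of the crictl-or-docker fallback idiom seen in
	// the log's "container status" command. Not minikube code.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := "sudo docker ps -a" // fallback, as in the log line
		if _, err := exec.LookPath("crictl"); err == nil {
			cmd = "sudo crictl ps -a" // preferred when crictl is on PATH
		}
		fmt.Println("would run:", cmd)
	}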
	I1213 11:57:24.798312  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:24.808695  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:24.808764  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:24.835809  604010 cri.go:89] found id: ""
	I1213 11:57:24.835839  604010 logs.go:282] 0 containers: []
	W1213 11:57:24.835848  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:24.835855  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:24.835913  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:24.864535  604010 cri.go:89] found id: ""
	I1213 11:57:24.864560  604010 logs.go:282] 0 containers: []
	W1213 11:57:24.864568  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:24.864574  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:24.864630  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:24.894267  604010 cri.go:89] found id: ""
	I1213 11:57:24.894290  604010 logs.go:282] 0 containers: []
	W1213 11:57:24.894299  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:24.894305  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:24.894364  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:24.923204  604010 cri.go:89] found id: ""
	I1213 11:57:24.923237  604010 logs.go:282] 0 containers: []
	W1213 11:57:24.923248  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:24.923254  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:24.923313  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:24.957663  604010 cri.go:89] found id: ""
	I1213 11:57:24.957689  604010 logs.go:282] 0 containers: []
	W1213 11:57:24.957698  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:24.957705  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:24.957786  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:24.982499  604010 cri.go:89] found id: ""
	I1213 11:57:24.982524  604010 logs.go:282] 0 containers: []
	W1213 11:57:24.982533  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:24.982539  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:24.982596  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:25.013305  604010 cri.go:89] found id: ""
	I1213 11:57:25.013332  604010 logs.go:282] 0 containers: []
	W1213 11:57:25.013342  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:25.013348  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:25.013426  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:25.042403  604010 cri.go:89] found id: ""
	I1213 11:57:25.042429  604010 logs.go:282] 0 containers: []
	W1213 11:57:25.042440  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:25.042450  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:25.042462  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:25.110074  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:25.100728   12114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:25.101372   12114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:25.103156   12114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:25.103840   12114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:25.106138   12114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:25.100728   12114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:25.101372   12114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:25.103156   12114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:25.103840   12114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:25.106138   12114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:25.110097  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:25.110109  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:25.136135  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:25.136175  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:25.187750  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:25.187781  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:25.269417  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:25.269496  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:27.795410  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:27.806308  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:27.806393  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:27.833178  604010 cri.go:89] found id: ""
	I1213 11:57:27.833204  604010 logs.go:282] 0 containers: []
	W1213 11:57:27.833213  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:27.833220  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:27.833280  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:27.864759  604010 cri.go:89] found id: ""
	I1213 11:57:27.864790  604010 logs.go:282] 0 containers: []
	W1213 11:57:27.864800  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:27.864807  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:27.864870  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:27.894576  604010 cri.go:89] found id: ""
	I1213 11:57:27.894643  604010 logs.go:282] 0 containers: []
	W1213 11:57:27.894668  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:27.894722  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:27.894809  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:27.919695  604010 cri.go:89] found id: ""
	I1213 11:57:27.919720  604010 logs.go:282] 0 containers: []
	W1213 11:57:27.919728  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:27.919735  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:27.919809  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:27.944128  604010 cri.go:89] found id: ""
	I1213 11:57:27.944152  604010 logs.go:282] 0 containers: []
	W1213 11:57:27.944161  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:27.944168  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:27.944247  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:27.968369  604010 cri.go:89] found id: ""
	I1213 11:57:27.968393  604010 logs.go:282] 0 containers: []
	W1213 11:57:27.968402  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:27.968409  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:27.968507  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:27.997345  604010 cri.go:89] found id: ""
	I1213 11:57:27.997372  604010 logs.go:282] 0 containers: []
	W1213 11:57:27.997381  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:27.997388  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:27.997451  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:28.029787  604010 cri.go:89] found id: ""
	I1213 11:57:28.029815  604010 logs.go:282] 0 containers: []
	W1213 11:57:28.029825  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:28.029837  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:28.029851  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:28.059897  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:28.059930  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:28.116398  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:28.116433  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:28.133239  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:28.133269  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:28.257725  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:28.249038   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:28.249625   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:28.251202   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:28.251730   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:28.253377   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:28.249038   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:28.249625   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:28.251202   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:28.251730   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:28.253377   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:28.257746  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:28.257758  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:30.784544  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:30.795049  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:30.795122  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:30.819394  604010 cri.go:89] found id: ""
	I1213 11:57:30.819419  604010 logs.go:282] 0 containers: []
	W1213 11:57:30.819427  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:30.819434  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:30.819491  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:30.843159  604010 cri.go:89] found id: ""
	I1213 11:57:30.843184  604010 logs.go:282] 0 containers: []
	W1213 11:57:30.843193  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:30.843199  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:30.843254  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:30.869845  604010 cri.go:89] found id: ""
	I1213 11:57:30.869867  604010 logs.go:282] 0 containers: []
	W1213 11:57:30.869876  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:30.869885  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:30.869941  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:30.896812  604010 cri.go:89] found id: ""
	I1213 11:57:30.896836  604010 logs.go:282] 0 containers: []
	W1213 11:57:30.896845  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:30.896853  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:30.896913  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:30.921770  604010 cri.go:89] found id: ""
	I1213 11:57:30.921794  604010 logs.go:282] 0 containers: []
	W1213 11:57:30.921804  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:30.921810  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:30.921867  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:30.948842  604010 cri.go:89] found id: ""
	I1213 11:57:30.948869  604010 logs.go:282] 0 containers: []
	W1213 11:57:30.948878  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:30.948885  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:30.948941  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:30.975761  604010 cri.go:89] found id: ""
	I1213 11:57:30.975785  604010 logs.go:282] 0 containers: []
	W1213 11:57:30.975794  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:30.975800  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:30.975861  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:31.009297  604010 cri.go:89] found id: ""
	I1213 11:57:31.009324  604010 logs.go:282] 0 containers: []
	W1213 11:57:31.009333  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:31.009344  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:31.009357  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:31.026148  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:31.026228  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:31.092501  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:31.083099   12350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:31.083809   12350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:31.085589   12350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:31.086335   12350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:31.087969   12350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:31.083099   12350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:31.083809   12350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:31.085589   12350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:31.086335   12350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:31.087969   12350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:31.092527  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:31.092540  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:31.119062  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:31.119100  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:31.148109  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:31.148140  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:33.733415  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:33.744879  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:33.744947  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:33.769975  604010 cri.go:89] found id: ""
	I1213 11:57:33.770002  604010 logs.go:282] 0 containers: []
	W1213 11:57:33.770012  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:33.770019  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:33.770118  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:33.795564  604010 cri.go:89] found id: ""
	I1213 11:57:33.795587  604010 logs.go:282] 0 containers: []
	W1213 11:57:33.795595  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:33.795602  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:33.795658  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:33.820165  604010 cri.go:89] found id: ""
	I1213 11:57:33.820189  604010 logs.go:282] 0 containers: []
	W1213 11:57:33.820197  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:33.820205  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:33.820266  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:33.850474  604010 cri.go:89] found id: ""
	I1213 11:57:33.850496  604010 logs.go:282] 0 containers: []
	W1213 11:57:33.850504  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:33.850511  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:33.850571  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:33.875577  604010 cri.go:89] found id: ""
	I1213 11:57:33.875599  604010 logs.go:282] 0 containers: []
	W1213 11:57:33.875613  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:33.875620  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:33.875676  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:33.899672  604010 cri.go:89] found id: ""
	I1213 11:57:33.899696  604010 logs.go:282] 0 containers: []
	W1213 11:57:33.899704  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:33.899711  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:33.899771  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:33.924330  604010 cri.go:89] found id: ""
	I1213 11:57:33.924353  604010 logs.go:282] 0 containers: []
	W1213 11:57:33.924363  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:33.924369  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:33.924426  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:33.948447  604010 cri.go:89] found id: ""
	I1213 11:57:33.948470  604010 logs.go:282] 0 containers: []
	W1213 11:57:33.948479  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:33.948489  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:33.948500  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:34.007962  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:34.008002  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:34.025302  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:34.025333  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:34.092523  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:34.083642   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:34.084406   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:34.086056   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:34.086792   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:34.088528   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:34.083642   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:34.084406   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:34.086056   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:34.086792   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:34.088528   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:34.092559  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:34.092571  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:34.118672  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:34.118743  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
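	[note] For reference, the full set of gatherers these cycles rotate through ("Gathering logs for ..."), with command lines copied verbatim from the log. The map below is only a hypothetical summary for readability, not how logs.go stores them, and map iteration order is unspecified (the log's own gather order also varies between cycles).

	// Hypothetical summary of the log-gathering commands in this section;
	// command strings are verbatim from the log, the map is invented.
	package main

	import "fmt"

	func main() {
		gatherers := map[string]string{
			"kubelet":          "sudo journalctl -u kubelet -n 400",
			"containerd":       "sudo journalctl -u containerd -n 400",
			"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
			"describe nodes":   "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
		}
		for name, cmd := range gatherers {
			fmt.Printf("Gathering logs for %s ...\n  %s\n", name, cmd)
		}
	}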
	I1213 11:57:36.651173  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:36.662055  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:36.662135  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:36.690956  604010 cri.go:89] found id: ""
	I1213 11:57:36.690981  604010 logs.go:282] 0 containers: []
	W1213 11:57:36.690990  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:36.690997  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:36.691067  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:36.716966  604010 cri.go:89] found id: ""
	I1213 11:57:36.716989  604010 logs.go:282] 0 containers: []
	W1213 11:57:36.716998  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:36.717004  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:36.717063  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:36.741609  604010 cri.go:89] found id: ""
	I1213 11:57:36.741651  604010 logs.go:282] 0 containers: []
	W1213 11:57:36.741661  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:36.741667  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:36.741736  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:36.766862  604010 cri.go:89] found id: ""
	I1213 11:57:36.766898  604010 logs.go:282] 0 containers: []
	W1213 11:57:36.766907  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:36.766914  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:36.766978  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:36.792075  604010 cri.go:89] found id: ""
	I1213 11:57:36.792103  604010 logs.go:282] 0 containers: []
	W1213 11:57:36.792112  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:36.792119  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:36.792198  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:36.817506  604010 cri.go:89] found id: ""
	I1213 11:57:36.817540  604010 logs.go:282] 0 containers: []
	W1213 11:57:36.817549  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:36.817558  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:36.817624  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:36.842603  604010 cri.go:89] found id: ""
	I1213 11:57:36.842627  604010 logs.go:282] 0 containers: []
	W1213 11:57:36.842635  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:36.842641  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:36.842721  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:36.868253  604010 cri.go:89] found id: ""
	I1213 11:57:36.868276  604010 logs.go:282] 0 containers: []
	W1213 11:57:36.868286  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:36.868295  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:36.868307  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:36.925033  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:36.925067  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:36.941121  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:36.941202  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:37.010945  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:36.998940   12577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:37.000295   12577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:37.000838   12577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:37.002747   12577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:37.003200   12577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:36.998940   12577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:37.000295   12577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:37.000838   12577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:37.002747   12577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:37.003200   12577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:37.010971  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:37.010986  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:37.039679  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:37.039717  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:39.569521  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:39.580209  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:39.580283  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:39.607577  604010 cri.go:89] found id: ""
	I1213 11:57:39.607609  604010 logs.go:282] 0 containers: []
	W1213 11:57:39.607618  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:39.607625  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:39.607684  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:39.632984  604010 cri.go:89] found id: ""
	I1213 11:57:39.633007  604010 logs.go:282] 0 containers: []
	W1213 11:57:39.633016  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:39.633022  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:39.633079  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:39.660977  604010 cri.go:89] found id: ""
	I1213 11:57:39.661006  604010 logs.go:282] 0 containers: []
	W1213 11:57:39.661016  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:39.661022  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:39.661083  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:39.685387  604010 cri.go:89] found id: ""
	I1213 11:57:39.685414  604010 logs.go:282] 0 containers: []
	W1213 11:57:39.685423  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:39.685430  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:39.685488  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:39.711315  604010 cri.go:89] found id: ""
	I1213 11:57:39.711354  604010 logs.go:282] 0 containers: []
	W1213 11:57:39.711364  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:39.711370  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:39.711434  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:39.736665  604010 cri.go:89] found id: ""
	I1213 11:57:39.736691  604010 logs.go:282] 0 containers: []
	W1213 11:57:39.736700  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:39.736707  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:39.736765  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:39.761215  604010 cri.go:89] found id: ""
	I1213 11:57:39.761240  604010 logs.go:282] 0 containers: []
	W1213 11:57:39.761250  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:39.761257  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:39.761317  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:39.785612  604010 cri.go:89] found id: ""
	I1213 11:57:39.785635  604010 logs.go:282] 0 containers: []
	W1213 11:57:39.785667  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:39.785677  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:39.785688  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:39.818169  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:39.818198  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:39.876172  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:39.876207  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:39.893614  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:39.893697  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:39.961561  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:39.953062   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:39.953798   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:39.955462   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:39.955793   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:39.957262   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:39.953062   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:39.953798   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:39.955462   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:39.955793   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:39.957262   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:39.961582  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:39.961598  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:42.487536  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:42.498423  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:42.498495  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:42.526754  604010 cri.go:89] found id: ""
	I1213 11:57:42.526784  604010 logs.go:282] 0 containers: []
	W1213 11:57:42.526793  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:42.526800  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:42.526866  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:42.557909  604010 cri.go:89] found id: ""
	I1213 11:57:42.557938  604010 logs.go:282] 0 containers: []
	W1213 11:57:42.557948  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:42.557955  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:42.558012  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:42.583283  604010 cri.go:89] found id: ""
	I1213 11:57:42.583311  604010 logs.go:282] 0 containers: []
	W1213 11:57:42.583319  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:42.583325  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:42.583417  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:42.612201  604010 cri.go:89] found id: ""
	I1213 11:57:42.612228  604010 logs.go:282] 0 containers: []
	W1213 11:57:42.612238  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:42.612244  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:42.612304  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:42.636897  604010 cri.go:89] found id: ""
	I1213 11:57:42.636926  604010 logs.go:282] 0 containers: []
	W1213 11:57:42.636935  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:42.636942  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:42.637003  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:42.662077  604010 cri.go:89] found id: ""
	I1213 11:57:42.662101  604010 logs.go:282] 0 containers: []
	W1213 11:57:42.662109  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:42.662116  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:42.662181  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:42.689090  604010 cri.go:89] found id: ""
	I1213 11:57:42.689117  604010 logs.go:282] 0 containers: []
	W1213 11:57:42.689126  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:42.689132  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:42.689194  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:42.714186  604010 cri.go:89] found id: ""
	I1213 11:57:42.714220  604010 logs.go:282] 0 containers: []
	W1213 11:57:42.714229  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:42.714239  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:42.714253  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:42.730012  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:42.730043  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:42.793528  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:42.784227   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:42.785106   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:42.787066   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:42.787860   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:42.789513   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:42.784227   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:42.785106   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:42.787066   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:42.787860   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:42.789513   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:42.793550  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:42.793562  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:42.820504  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:42.820540  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:42.850739  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:42.850772  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:45.416253  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:45.428104  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:45.428174  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:45.486919  604010 cri.go:89] found id: ""
	I1213 11:57:45.486943  604010 logs.go:282] 0 containers: []
	W1213 11:57:45.486952  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:45.486959  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:45.487018  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:45.518438  604010 cri.go:89] found id: ""
	I1213 11:57:45.518466  604010 logs.go:282] 0 containers: []
	W1213 11:57:45.518475  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:45.518482  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:45.518539  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:45.543147  604010 cri.go:89] found id: ""
	I1213 11:57:45.543174  604010 logs.go:282] 0 containers: []
	W1213 11:57:45.543183  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:45.543189  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:45.543247  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:45.568184  604010 cri.go:89] found id: ""
	I1213 11:57:45.568210  604010 logs.go:282] 0 containers: []
	W1213 11:57:45.568219  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:45.568226  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:45.568283  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:45.597036  604010 cri.go:89] found id: ""
	I1213 11:57:45.597062  604010 logs.go:282] 0 containers: []
	W1213 11:57:45.597072  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:45.597078  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:45.597140  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:45.625538  604010 cri.go:89] found id: ""
	I1213 11:57:45.625563  604010 logs.go:282] 0 containers: []
	W1213 11:57:45.625572  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:45.625579  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:45.625664  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:45.650305  604010 cri.go:89] found id: ""
	I1213 11:57:45.650340  604010 logs.go:282] 0 containers: []
	W1213 11:57:45.650350  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:45.650356  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:45.650415  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:45.674642  604010 cri.go:89] found id: ""
	I1213 11:57:45.674668  604010 logs.go:282] 0 containers: []
	W1213 11:57:45.674677  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:45.674723  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:45.674736  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:45.737984  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:45.729194   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:45.729808   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:45.731387   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:45.731876   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:45.733423   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:45.729194   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:45.729808   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:45.731387   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:45.731876   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:45.733423   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:45.738014  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:45.738030  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:45.764253  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:45.764293  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:45.794872  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:45.794900  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:45.852148  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:45.852181  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:48.369680  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:48.381452  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:48.381527  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:48.406963  604010 cri.go:89] found id: ""
	I1213 11:57:48.406989  604010 logs.go:282] 0 containers: []
	W1213 11:57:48.406998  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:48.407004  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:48.407069  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:48.453016  604010 cri.go:89] found id: ""
	I1213 11:57:48.453043  604010 logs.go:282] 0 containers: []
	W1213 11:57:48.453052  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:48.453060  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:48.453120  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:48.512775  604010 cri.go:89] found id: ""
	I1213 11:57:48.512806  604010 logs.go:282] 0 containers: []
	W1213 11:57:48.512815  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:48.512821  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:48.512879  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:48.538032  604010 cri.go:89] found id: ""
	I1213 11:57:48.538055  604010 logs.go:282] 0 containers: []
	W1213 11:57:48.538064  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:48.538070  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:48.538129  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:48.562781  604010 cri.go:89] found id: ""
	I1213 11:57:48.562815  604010 logs.go:282] 0 containers: []
	W1213 11:57:48.562831  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:48.562841  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:48.562899  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:48.592224  604010 cri.go:89] found id: ""
	I1213 11:57:48.592249  604010 logs.go:282] 0 containers: []
	W1213 11:57:48.592258  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:48.592265  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:48.592324  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:48.616499  604010 cri.go:89] found id: ""
	I1213 11:57:48.616524  604010 logs.go:282] 0 containers: []
	W1213 11:57:48.616533  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:48.616540  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:48.616604  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:48.641140  604010 cri.go:89] found id: ""
	I1213 11:57:48.641164  604010 logs.go:282] 0 containers: []
	W1213 11:57:48.641173  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:48.641183  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:48.641193  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:48.667031  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:48.667069  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:48.696402  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:48.696431  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:48.752046  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:48.752080  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:48.768352  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:48.768382  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:48.835752  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:48.828038   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:48.828514   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:48.830127   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:48.830542   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:48.831979   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:48.828038   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:48.828514   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:48.830127   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:48.830542   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:48.831979   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:51.337160  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:51.349596  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:51.349697  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:51.384310  604010 cri.go:89] found id: ""
	I1213 11:57:51.384341  604010 logs.go:282] 0 containers: []
	W1213 11:57:51.384350  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:51.384358  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:51.384415  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:51.409502  604010 cri.go:89] found id: ""
	I1213 11:57:51.409523  604010 logs.go:282] 0 containers: []
	W1213 11:57:51.409532  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:51.409539  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:51.409595  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:51.444866  604010 cri.go:89] found id: ""
	I1213 11:57:51.444887  604010 logs.go:282] 0 containers: []
	W1213 11:57:51.444896  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:51.444901  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:51.444957  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:51.498878  604010 cri.go:89] found id: ""
	I1213 11:57:51.498900  604010 logs.go:282] 0 containers: []
	W1213 11:57:51.498908  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:51.498915  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:51.498970  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:51.532054  604010 cri.go:89] found id: ""
	I1213 11:57:51.532082  604010 logs.go:282] 0 containers: []
	W1213 11:57:51.532091  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:51.532098  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:51.532159  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:51.561798  604010 cri.go:89] found id: ""
	I1213 11:57:51.561833  604010 logs.go:282] 0 containers: []
	W1213 11:57:51.561842  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:51.561849  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:51.561906  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:51.586723  604010 cri.go:89] found id: ""
	I1213 11:57:51.586798  604010 logs.go:282] 0 containers: []
	W1213 11:57:51.586820  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:51.586843  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:51.586951  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:51.612513  604010 cri.go:89] found id: ""
	I1213 11:57:51.612538  604010 logs.go:282] 0 containers: []
	W1213 11:57:51.612547  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:51.612557  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:51.612569  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:51.628622  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:51.628650  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:51.699783  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:51.691237   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:51.691797   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:51.693193   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:51.693944   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:51.695717   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:51.691237   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:51.691797   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:51.693193   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:51.693944   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:51.695717   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:51.699815  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:51.699832  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:51.725055  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:51.725092  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:51.758574  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:51.758604  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:54.315140  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:54.325600  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:54.325693  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:54.352056  604010 cri.go:89] found id: ""
	I1213 11:57:54.352081  604010 logs.go:282] 0 containers: []
	W1213 11:57:54.352089  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:54.352096  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:54.352157  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:54.375586  604010 cri.go:89] found id: ""
	I1213 11:57:54.375611  604010 logs.go:282] 0 containers: []
	W1213 11:57:54.375620  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:54.375626  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:54.375683  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:54.399138  604010 cri.go:89] found id: ""
	I1213 11:57:54.399163  604010 logs.go:282] 0 containers: []
	W1213 11:57:54.399172  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:54.399178  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:54.399234  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:54.439999  604010 cri.go:89] found id: ""
	I1213 11:57:54.440025  604010 logs.go:282] 0 containers: []
	W1213 11:57:54.440033  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:54.440039  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:54.440096  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:54.505093  604010 cri.go:89] found id: ""
	I1213 11:57:54.505124  604010 logs.go:282] 0 containers: []
	W1213 11:57:54.505133  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:54.505140  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:54.505198  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:54.529921  604010 cri.go:89] found id: ""
	I1213 11:57:54.529947  604010 logs.go:282] 0 containers: []
	W1213 11:57:54.529956  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:54.529966  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:54.530029  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:54.556363  604010 cri.go:89] found id: ""
	I1213 11:57:54.556390  604010 logs.go:282] 0 containers: []
	W1213 11:57:54.556399  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:54.556406  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:54.556483  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:54.581531  604010 cri.go:89] found id: ""
	I1213 11:57:54.581556  604010 logs.go:282] 0 containers: []
	W1213 11:57:54.581565  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:54.581574  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:54.581603  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:54.637009  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:54.637043  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:54.652919  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:54.652949  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:54.717113  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:54.708684   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:54.709580   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:54.711317   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:54.711640   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:54.713137   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:54.708684   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:54.709580   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:54.711317   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:54.711640   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:54.713137   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:54.717134  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:54.717148  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:54.743116  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:54.743151  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:57.272010  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:57.285875  604010 out.go:203] 
	W1213 11:57:57.288788  604010 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1213 11:57:57.288838  604010 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1213 11:57:57.288853  604010 out.go:285] * Related issues:
	W1213 11:57:57.288872  604010 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1213 11:57:57.288889  604010 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1213 11:57:57.291728  604010 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.355817742Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.355832504Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.355869739Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.355890810Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.355900722Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.355913464Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.355922515Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.355936029Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.355951643Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.355983734Z" level=info msg="Connect containerd service"
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.356248656Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.356827911Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.372443055Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.372505251Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.372539417Z" level=info msg="Start subscribing containerd event"
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.372587426Z" level=info msg="Start recovering state"
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.413846470Z" level=info msg="Start event monitor"
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.413904095Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.413916928Z" level=info msg="Start streaming server"
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.413926332Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.413934643Z" level=info msg="runtime interface starting up..."
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.413940961Z" level=info msg="starting plugins..."
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.413972059Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 11:51:54 newest-cni-796924 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.415701136Z" level=info msg="containerd successfully booted in 0.081179s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:58:00.723484   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:58:00.724006   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:58:00.725658   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:58:00.726545   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:58:00.728134   13429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +12.907693] overlayfs: idmapped layers are currently not supported
	[Dec13 09:25] overlayfs: idmapped layers are currently not supported
	[ +26.192425] overlayfs: idmapped layers are currently not supported
	[Dec13 09:26] overlayfs: idmapped layers are currently not supported
	[ +25.729788] overlayfs: idmapped layers are currently not supported
	[Dec13 09:27] overlayfs: idmapped layers are currently not supported
	[Dec13 09:28] overlayfs: idmapped layers are currently not supported
	[Dec13 09:31] overlayfs: idmapped layers are currently not supported
	[Dec13 09:32] overlayfs: idmapped layers are currently not supported
	[ +17.979093] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec13 09:43] overlayfs: idmapped layers are currently not supported
	[Dec13 09:45] overlayfs: idmapped layers are currently not supported
	[ +25.885102] overlayfs: idmapped layers are currently not supported
	[Dec13 09:46] overlayfs: idmapped layers are currently not supported
	[ +22.078149] overlayfs: idmapped layers are currently not supported
	[Dec13 09:47] overlayfs: idmapped layers are currently not supported
	[Dec13 09:48] overlayfs: idmapped layers are currently not supported
	[Dec13 09:49] overlayfs: idmapped layers are currently not supported
	[Dec13 09:51] overlayfs: idmapped layers are currently not supported
	[ +17.043564] overlayfs: idmapped layers are currently not supported
	[Dec13 09:52] overlayfs: idmapped layers are currently not supported
	[Dec13 09:53] overlayfs: idmapped layers are currently not supported
	[Dec13 10:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:19] hrtimer: interrupt took 21247146 ns
	
	
	==> kernel <==
	 11:58:00 up  4:40,  0 user,  load average: 0.65, 0.86, 1.23
	Linux newest-cni-796924 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 11:57:57 newest-cni-796924 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:57:58 newest-cni-796924 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 13 11:57:58 newest-cni-796924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:57:58 newest-cni-796924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:57:58 newest-cni-796924 kubelet[13303]: E1213 11:57:58.268275   13303 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:57:58 newest-cni-796924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:57:58 newest-cni-796924 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:57:58 newest-cni-796924 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 484.
	Dec 13 11:57:58 newest-cni-796924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:57:58 newest-cni-796924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:57:59 newest-cni-796924 kubelet[13321]: E1213 11:57:59.034968   13321 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:57:59 newest-cni-796924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:57:59 newest-cni-796924 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:57:59 newest-cni-796924 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 485.
	Dec 13 11:57:59 newest-cni-796924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:57:59 newest-cni-796924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:57:59 newest-cni-796924 kubelet[13328]: E1213 11:57:59.783323   13328 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:57:59 newest-cni-796924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:57:59 newest-cni-796924 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:58:00 newest-cni-796924 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 486.
	Dec 13 11:58:00 newest-cni-796924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:58:00 newest-cni-796924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:58:00 newest-cni-796924 kubelet[13371]: E1213 11:58:00.526115   13371 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:58:00 newest-cni-796924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:58:00 newest-cni-796924 systemd[1]: kubelet.service: Failed with result 'exit-code'.

-- /stdout --
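The kubelet entries above are the actual root cause of this failure: the v1.35.0-beta.0 kubelet refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so the control-plane static pods are never launched and every pgrep/crictl probe in the wait loop earlier in the log comes back empty. A minimal way to confirm which cgroup version a host or the minikube node container is on (a sketch, not part of the test run; the profile name newest-cni-796924 is taken from the log above, and the docker format field assumes Docker 20.10 or newer):

	# cgroup2fs => cgroup v2, tmpfs => cgroup v1
	stat -fc %T /sys/fs/cgroup
	# the same check inside the minikube node container
	minikube ssh -p newest-cni-796924 "stat -fc %T /sys/fs/cgroup"
	# Docker's view of the host cgroup version
	docker info --format '{{.CgroupVersion}}'

The ==> kernel <== section shows a 5.15.0-1084-aws kernel with an Ubuntu 20.04 userspace, which boots with cgroup v1 by default; booting the host with systemd.unified_cgroup_hierarchy=1 (or moving to a newer base image) is the usual fix before this kubelet version can run.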
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-796924 -n newest-cni-796924
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-796924 -n newest-cni-796924: exit status 2 (333.943185ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-796924" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (374.01s)
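The "wait for apiserver proc" phase that failed here is just the probe loop visible throughout the log: one pgrep for the apiserver process, then a crictl listing per control-plane component. The same checks can be replayed by hand against the profile (commands copied from the ssh_runner lines above; only the minikube ssh wrapper is added, and it assumes the profile still exists):

	# does an apiserver process exist inside the node? (pgrep exits 1 on no match)
	minikube ssh -p newest-cni-796924 "sudo pgrep -xnf kube-apiserver.*minikube.*"
	# are any control-plane containers known to containerd?
	minikube ssh -p newest-cni-796924 "sudo crictl ps -a --quiet --name=kube-apiserver"
	minikube ssh -p newest-cni-796924 "sudo crictl ps -a --quiet --name=etcd"

Empty output from all three matches the repeated found id: "" / "0 containers" lines above, and that is exactly the condition that ends in K8S_APISERVER_MISSING after the 6m0s wait.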

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.74s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[the WARNING above repeated verbatim 109 more times while the apiserver remained unreachable]
E1213 11:54:47.365149  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
[the same WARNING repeated verbatim 21 more times]
E1213 11:55:08.208754  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/old-k8s-version-624185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
[the same WARNING repeated verbatim 4 more times]
E1213 11:55:12.241119  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:55:12.403793  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/default-k8s-diff-port-191845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
[the same WARNING repeated verbatim 40 more times for the remainder of the 9m0s wait]
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 11:56:31.272733  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/old-k8s-version-624185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[last message repeated a further 16 times]
E1213 11:56:48.080749  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[last message repeated a further 106 times: the apiserver never became reachable before the poll timed out]
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
I1213 11:59:07.010082  308915 config.go:182] Loaded profile config "auto-270721": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[last message repeated 39 more times]
E1213 11:59:47.365190  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[last message repeated 7 more times]
E1213 11:59:55.327842  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[last message repeated 12 more times]
E1213 12:00:08.209314  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/old-k8s-version-624185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[last message repeated 3 more times]
E1213 12:00:12.240573  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:00:12.403917  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/default-k8s-diff-port-191845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[last message repeated 57 more times]
E1213 12:01:10.439080  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:01:35.465786  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/default-k8s-diff-port-191845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:01:48.080565  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-333352 -n no-preload-333352
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-333352 -n no-preload-333352: exit status 2 (513.888932ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "no-preload-333352" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
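
The polling loop above never saw the dashboard pod because the apiserver at 192.168.85.2:8443 refused connections for the entire 9m window. A minimal sketch for reproducing the check by hand, assuming the profile's kubeconfig context is named after the profile:

    kubectl --context no-preload-333352 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
    out/minikube-linux-arm64 -p no-preload-333352 status
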
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-333352
helpers_test.go:244: (dbg) docker inspect no-preload-333352:

-- stdout --
	[
	    {
	        "Id": "ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db",
	        "Created": "2025-12-13T11:36:44.52795509Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 597136,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T11:46:48.212033137Z",
	            "FinishedAt": "2025-12-13T11:46:46.812235669Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db/hostname",
	        "HostsPath": "/var/lib/docker/containers/ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db/hosts",
	        "LogPath": "/var/lib/docker/containers/ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db/ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db-json.log",
	        "Name": "/no-preload-333352",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-333352:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-333352",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db",
	                "LowerDir": "/var/lib/docker/overlay2/d63500bace015e4f0992e53022eb307c815344bef0eac4b57fc68ef6b6be3891-init/diff:/var/lib/docker/overlay2/5d0ca86320f2c06765995d644a73af30cd6ddb36f5c1a2a6b1ebb24af53cdabd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d63500bace015e4f0992e53022eb307c815344bef0eac4b57fc68ef6b6be3891/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d63500bace015e4f0992e53022eb307c815344bef0eac4b57fc68ef6b6be3891/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d63500bace015e4f0992e53022eb307c815344bef0eac4b57fc68ef6b6be3891/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-333352",
	                "Source": "/var/lib/docker/volumes/no-preload-333352/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-333352",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-333352",
	                "name.minikube.sigs.k8s.io": "no-preload-333352",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "368f444acead1313629634c955e38e7aa3bb1a58261aa4f155fef5ab3cc6d2d9",
	            "SandboxKey": "/var/run/docker/netns/368f444acead",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-333352": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:92:40:ad:16:f6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ee20fc50f482b31273047147a2f419c36704bb98933537d0ac5901a560402043",
	                    "EndpointID": "c1aa6ce135257fa89e5e51421f21414b58021c38959e96fd72756c63a958cfdd",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-333352",
	                        "ca124efb8aeb"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
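
Note the divergence in the inspect output: Docker reports the container itself as "running" even though nothing answers on the published 8443 port, which points at the control plane inside the container rather than at the container runtime. A sketch for pulling just those fields instead of the full inspect document, using docker's built-in Go templating:

    docker inspect -f '{{.State.Status}} since {{.State.StartedAt}}' no-preload-333352
    docker port no-preload-333352 8443
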
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-333352 -n no-preload-333352
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-333352 -n no-preload-333352: exit status 2 (416.556387ms)

-- stdout --
	Running
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
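
minikube status encodes component state in its exit code (hence the harness's "may be ok" note), so a nonzero exit here is expected while the apiserver is down. The same Go templates the harness uses can combine several fields in one call; a sketch:

    out/minikube-linux-arm64 status -p no-preload-333352 --format '{{.Host}}/{{.Kubelet}}/{{.APIServer}}'
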
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-333352 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p no-preload-333352 logs -n 25: (1.008784948s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                     ARGS                                                                     │    PROFILE     │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p flannel-270721 sudo systemctl status kubelet --all --full --no-pager                                                                      │ flannel-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:01 UTC │ 13 Dec 25 12:01 UTC │
	│ ssh     │ -p flannel-270721 sudo systemctl cat kubelet --no-pager                                                                                      │ flannel-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:01 UTC │ 13 Dec 25 12:01 UTC │
	│ ssh     │ -p flannel-270721 sudo journalctl -xeu kubelet --all --full --no-pager                                                                       │ flannel-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:01 UTC │ 13 Dec 25 12:01 UTC │
	│ ssh     │ -p flannel-270721 sudo cat /etc/kubernetes/kubelet.conf                                                                                      │ flannel-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:01 UTC │ 13 Dec 25 12:01 UTC │
	│ ssh     │ -p flannel-270721 sudo cat /var/lib/kubelet/config.yaml                                                                                      │ flannel-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:01 UTC │ 13 Dec 25 12:01 UTC │
	│ ssh     │ -p flannel-270721 sudo systemctl status docker --all --full --no-pager                                                                       │ flannel-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:01 UTC │                     │
	│ ssh     │ -p flannel-270721 sudo systemctl cat docker --no-pager                                                                                       │ flannel-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:01 UTC │ 13 Dec 25 12:01 UTC │
	│ ssh     │ -p flannel-270721 sudo cat /etc/docker/daemon.json                                                                                           │ flannel-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:01 UTC │                     │
	│ ssh     │ -p flannel-270721 sudo docker system info                                                                                                    │ flannel-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:01 UTC │                     │
	│ ssh     │ -p flannel-270721 sudo systemctl status cri-docker --all --full --no-pager                                                                   │ flannel-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:01 UTC │                     │
	│ ssh     │ -p flannel-270721 sudo systemctl cat cri-docker --no-pager                                                                                   │ flannel-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:01 UTC │ 13 Dec 25 12:01 UTC │
	│ ssh     │ -p flannel-270721 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                              │ flannel-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:01 UTC │                     │
	│ ssh     │ -p flannel-270721 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                        │ flannel-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:01 UTC │ 13 Dec 25 12:01 UTC │
	│ ssh     │ -p flannel-270721 sudo cri-dockerd --version                                                                                                 │ flannel-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:01 UTC │ 13 Dec 25 12:01 UTC │
	│ ssh     │ -p flannel-270721 sudo systemctl status containerd --all --full --no-pager                                                                   │ flannel-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:01 UTC │ 13 Dec 25 12:01 UTC │
	│ ssh     │ -p flannel-270721 sudo systemctl cat containerd --no-pager                                                                                   │ flannel-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:01 UTC │ 13 Dec 25 12:01 UTC │
	│ ssh     │ -p flannel-270721 sudo cat /lib/systemd/system/containerd.service                                                                            │ flannel-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:01 UTC │ 13 Dec 25 12:01 UTC │
	│ ssh     │ -p flannel-270721 sudo cat /etc/containerd/config.toml                                                                                       │ flannel-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:01 UTC │ 13 Dec 25 12:01 UTC │
	│ ssh     │ -p flannel-270721 sudo containerd config dump                                                                                                │ flannel-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:01 UTC │ 13 Dec 25 12:01 UTC │
	│ ssh     │ -p flannel-270721 sudo systemctl status crio --all --full --no-pager                                                                         │ flannel-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:01 UTC │                     │
	│ ssh     │ -p flannel-270721 sudo systemctl cat crio --no-pager                                                                                         │ flannel-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:01 UTC │ 13 Dec 25 12:01 UTC │
	│ ssh     │ -p flannel-270721 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                               │ flannel-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:01 UTC │ 13 Dec 25 12:01 UTC │
	│ ssh     │ -p flannel-270721 sudo crio config                                                                                                           │ flannel-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:01 UTC │ 13 Dec 25 12:01 UTC │
	│ delete  │ -p flannel-270721                                                                                                                            │ flannel-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:01 UTC │ 13 Dec 25 12:01 UTC │
	│ start   │ -p calico-270721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd │ calico-270721  │ jenkins │ v1.37.0 │ 13 Dec 25 12:01 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
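
The audit rows above for flannel-270721 are the harness's standard runtime post-mortem sweep. The same diagnostics can be replayed by hand; a sketch mirroring the rows above (flannel-270721 was deleted at the end of the sweep, so substitute a live profile name):

    out/minikube-linux-arm64 ssh -p flannel-270721 sudo systemctl status kubelet --all --full --no-pager
    out/minikube-linux-arm64 ssh -p flannel-270721 sudo journalctl -xeu kubelet --all --full --no-pager
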
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 12:01:11
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 12:01:11.637212  636421 out.go:360] Setting OutFile to fd 1 ...
	I1213 12:01:11.637369  636421 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 12:01:11.637382  636421 out.go:374] Setting ErrFile to fd 2...
	I1213 12:01:11.637389  636421 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 12:01:11.637794  636421 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 12:01:11.638369  636421 out.go:368] Setting JSON to false
	I1213 12:01:11.639383  636421 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":17025,"bootTime":1765610247,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 12:01:11.639491  636421 start.go:143] virtualization:  
	I1213 12:01:11.643616  636421 out.go:179] * [calico-270721] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 12:01:11.648178  636421 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 12:01:11.648264  636421 notify.go:221] Checking for updates...
	I1213 12:01:11.654665  636421 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 12:01:11.658003  636421 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 12:01:11.663413  636421 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 12:01:11.666511  636421 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 12:01:11.669513  636421 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 12:01:11.673947  636421 config.go:182] Loaded profile config "no-preload-333352": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 12:01:11.674056  636421 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 12:01:11.714172  636421 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 12:01:11.714305  636421 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 12:01:11.776496  636421 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 12:01:11.767017119 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 12:01:11.776604  636421 docker.go:319] overlay module found
	I1213 12:01:11.779880  636421 out.go:179] * Using the docker driver based on user configuration
	I1213 12:01:11.782933  636421 start.go:309] selected driver: docker
	I1213 12:01:11.782960  636421 start.go:927] validating driver "docker" against <nil>
	I1213 12:01:11.782976  636421 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 12:01:11.783728  636421 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 12:01:11.837827  636421 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 12:01:11.826963946 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 12:01:11.837983  636421 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 12:01:11.838209  636421 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 12:01:11.841374  636421 out.go:179] * Using Docker driver with root privileges
	I1213 12:01:11.844291  636421 cni.go:84] Creating CNI manager for "calico"
	I1213 12:01:11.844323  636421 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1213 12:01:11.844437  636421 start.go:353] cluster config:
	{Name:calico-270721 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-270721 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 12:01:11.849530  636421 out.go:179] * Starting "calico-270721" primary control-plane node in "calico-270721" cluster
	I1213 12:01:11.852388  636421 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 12:01:11.855434  636421 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 12:01:11.858350  636421 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 12:01:11.858412  636421 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1213 12:01:11.858427  636421 cache.go:65] Caching tarball of preloaded images
	I1213 12:01:11.858444  636421 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 12:01:11.858558  636421 preload.go:238] Found /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 12:01:11.858570  636421 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1213 12:01:11.858732  636421 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/config.json ...
	I1213 12:01:11.858758  636421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/config.json: {Name:mk87cc2dfc2d51b23568811321ef8c4bce822a8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:01:11.876665  636421 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 12:01:11.876685  636421 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 12:01:11.876709  636421 cache.go:243] Successfully downloaded all kic artifacts
	I1213 12:01:11.876744  636421 start.go:360] acquireMachinesLock for calico-270721: {Name:mk4e6b3be20ae764cb0c90faac7d4cdcbe8ce6b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:01:11.876852  636421 start.go:364] duration metric: took 92.572µs to acquireMachinesLock for "calico-270721"
	I1213 12:01:11.876879  636421 start.go:93] Provisioning new machine with config: &{Name:calico-270721 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-270721 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 12:01:11.876944  636421 start.go:125] createHost starting for "" (driver="docker")
	I1213 12:01:11.880515  636421 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 12:01:11.880779  636421 start.go:159] libmachine.API.Create for "calico-270721" (driver="docker")
	I1213 12:01:11.880821  636421 client.go:173] LocalClient.Create starting
	I1213 12:01:11.880919  636421 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem
	I1213 12:01:11.880958  636421 main.go:143] libmachine: Decoding PEM data...
	I1213 12:01:11.880978  636421 main.go:143] libmachine: Parsing certificate...
	I1213 12:01:11.881040  636421 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem
	I1213 12:01:11.881070  636421 main.go:143] libmachine: Decoding PEM data...
	I1213 12:01:11.881086  636421 main.go:143] libmachine: Parsing certificate...
	I1213 12:01:11.881459  636421 cli_runner.go:164] Run: docker network inspect calico-270721 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 12:01:11.897985  636421 cli_runner.go:211] docker network inspect calico-270721 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 12:01:11.898068  636421 network_create.go:284] running [docker network inspect calico-270721] to gather additional debugging logs...
	I1213 12:01:11.898092  636421 cli_runner.go:164] Run: docker network inspect calico-270721
	W1213 12:01:11.914180  636421 cli_runner.go:211] docker network inspect calico-270721 returned with exit code 1
	I1213 12:01:11.914212  636421 network_create.go:287] error running [docker network inspect calico-270721]: docker network inspect calico-270721: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-270721 not found
	I1213 12:01:11.914226  636421 network_create.go:289] output of [docker network inspect calico-270721]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-270721 not found
	
	** /stderr **
	I1213 12:01:11.914316  636421 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 12:01:11.931133  636421 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-381e4ce3c9ab IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:2d:23:57:0e:cc} reservation:<nil>}
	I1213 12:01:11.931554  636421 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bd1082d121b0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:7a:42:ce:41:ea:ae} reservation:<nil>}
	I1213 12:01:11.931950  636421 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ebeb7162e340 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:cf:aa:41:ac:19} reservation:<nil>}
	I1213 12:01:11.932419  636421 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a05120}
	I1213 12:01:11.932443  636421 network_create.go:124] attempt to create docker network calico-270721 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1213 12:01:11.932498  636421 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-270721 calico-270721
	I1213 12:01:11.998656  636421 network_create.go:108] docker network calico-270721 192.168.76.0/24 created
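
The three "skipping subnet" lines above show the selection walk: minikube probes candidate 192.168.x.0/24 networks in steps of 9 (49, 58, 67, 76, ...) and takes the first one whose gateway address is not already owned by a host bridge. A minimal Go sketch of that probe, under the assumption that "gateway address present on a local interface" is a good-enough liveness signal; the subnetTaken helper is illustrative, not minikube's actual network.go logic:

    package main

    import (
        "fmt"
        "net"
    )

    // subnetTaken reports whether any local interface already holds the
    // would-be gateway of a candidate subnet (hypothetical helper that
    // approximates the bridge-interface check seen in the log).
    func subnetTaken(gateway string) bool {
        addrs, err := net.InterfaceAddrs()
        if err != nil {
            return false
        }
        for _, a := range addrs {
            if ipn, ok := a.(*net.IPNet); ok && ipn.IP.String() == gateway {
                return true
            }
        }
        return false
    }

    func main() {
        // Candidate third octets step by 9, matching the 49/58/67/76 walk above.
        for third := 49; third <= 247; third += 9 {
            gw := fmt.Sprintf("192.168.%d.1", third)
            if subnetTaken(gw) {
                fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", third)
                continue
            }
            fmt.Printf("using free private subnet 192.168.%d.0/24 (gateway %s)\n", third, gw)
            return
        }
    }
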
	I1213 12:01:11.998726  636421 kic.go:121] calculated static IP "192.168.76.2" for the "calico-270721" container
	I1213 12:01:11.998812  636421 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 12:01:12.018760  636421 cli_runner.go:164] Run: docker volume create calico-270721 --label name.minikube.sigs.k8s.io=calico-270721 --label created_by.minikube.sigs.k8s.io=true
	I1213 12:01:12.038458  636421 oci.go:103] Successfully created a docker volume calico-270721
	I1213 12:01:12.038548  636421 cli_runner.go:164] Run: docker run --rm --name calico-270721-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-270721 --entrypoint /usr/bin/test -v calico-270721:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 12:01:12.594935  636421 oci.go:107] Successfully prepared a docker volume calico-270721
	I1213 12:01:12.595007  636421 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 12:01:12.595022  636421 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 12:01:12.595087  636421 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v calico-270721:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 12:01:16.611317  636421 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v calico-270721:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.016176256s)
	I1213 12:01:16.611351  636421 kic.go:203] duration metric: took 4.016326025s to extract preloaded images to volume ...
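
The extraction step that just completed runs tar inside a throwaway kicbase container so the preloaded images land directly in the named volume that later backs the node's /var. A sketch of the same invocation from Go, with the paths and names copied from the log lines above:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Values taken from the log lines above.
        tarball := "/home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4"
        volume := "calico-270721"
        image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083"

        // One-shot container: bind the tarball read-only, mount the volume,
        // and let tar unpack straight into it.
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
    }
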
	W1213 12:01:16.611487  636421 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 12:01:16.611614  636421 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 12:01:16.668442  636421 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-270721 --name calico-270721 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-270721 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-270721 --network calico-270721 --ip 192.168.76.2 --volume calico-270721:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 12:01:16.995621  636421 cli_runner.go:164] Run: docker container inspect calico-270721 --format={{.State.Running}}
	I1213 12:01:17.021415  636421 cli_runner.go:164] Run: docker container inspect calico-270721 --format={{.State.Status}}
	I1213 12:01:17.051057  636421 cli_runner.go:164] Run: docker exec calico-270721 stat /var/lib/dpkg/alternatives/iptables
	I1213 12:01:17.103961  636421 oci.go:144] the created container "calico-270721" has a running status.
	I1213 12:01:17.103994  636421 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/calico-270721/id_rsa...
	I1213 12:01:17.654966  636421 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22127-307042/.minikube/machines/calico-270721/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 12:01:17.687277  636421 cli_runner.go:164] Run: docker container inspect calico-270721 --format={{.State.Status}}
	I1213 12:01:17.713876  636421 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 12:01:17.713901  636421 kic_runner.go:114] Args: [docker exec --privileged calico-270721 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 12:01:17.758419  636421 cli_runner.go:164] Run: docker container inspect calico-270721 --format={{.State.Status}}
	I1213 12:01:17.775803  636421 machine.go:94] provisionDockerMachine start ...
	I1213 12:01:17.775912  636421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-270721
	I1213 12:01:17.792625  636421 main.go:143] libmachine: Using SSH client type: native
	I1213 12:01:17.792984  636421 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33455 <nil> <nil>}
	I1213 12:01:17.793002  636421 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 12:01:17.793706  636421 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
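
The handshake EOF here is transient: the container's sshd is still starting, and the provisioner keeps retrying until the forwarded port answers (success follows about three seconds later). A minimal retry loop under that assumption; the real code retries the SSH handshake itself rather than a bare TCP dial:

    package main

    import (
        "log"
        "net"
        "time"
    )

    func main() {
        // 127.0.0.1:33455 is the host port docker mapped to the container's 22/tcp.
        for attempt := 0; attempt < 30; attempt++ {
            conn, err := net.DialTimeout("tcp", "127.0.0.1:33455", 2*time.Second)
            if err == nil {
                conn.Close() // sshd is accepting; hand off to the real SSH client
                return
            }
            time.Sleep(time.Second)
        }
        log.Fatal("ssh port never came up")
    }
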
	I1213 12:01:20.942379  636421 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-270721
	
	I1213 12:01:20.942410  636421 ubuntu.go:182] provisioning hostname "calico-270721"
	I1213 12:01:20.942480  636421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-270721
	I1213 12:01:20.960510  636421 main.go:143] libmachine: Using SSH client type: native
	I1213 12:01:20.960820  636421 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33455 <nil> <nil>}
	I1213 12:01:20.960836  636421 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-270721 && echo "calico-270721" | sudo tee /etc/hostname
	I1213 12:01:21.128594  636421 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-270721
	
	I1213 12:01:21.128738  636421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-270721
	I1213 12:01:21.146853  636421 main.go:143] libmachine: Using SSH client type: native
	I1213 12:01:21.147165  636421 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33455 <nil> <nil>}
	I1213 12:01:21.147188  636421 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-270721' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-270721/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-270721' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 12:01:21.303156  636421 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 12:01:21.303188  636421 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-307042/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-307042/.minikube}
	I1213 12:01:21.303247  636421 ubuntu.go:190] setting up certificates
	I1213 12:01:21.303258  636421 provision.go:84] configureAuth start
	I1213 12:01:21.303347  636421 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-270721
	I1213 12:01:21.323582  636421 provision.go:143] copyHostCerts
	I1213 12:01:21.323649  636421 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem, removing ...
	I1213 12:01:21.323664  636421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem
	I1213 12:01:21.323741  636421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem (1082 bytes)
	I1213 12:01:21.323839  636421 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem, removing ...
	I1213 12:01:21.323850  636421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem
	I1213 12:01:21.323878  636421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem (1123 bytes)
	I1213 12:01:21.323934  636421 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem, removing ...
	I1213 12:01:21.323943  636421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem
	I1213 12:01:21.323966  636421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem (1675 bytes)
	I1213 12:01:21.324016  636421 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem org=jenkins.calico-270721 san=[127.0.0.1 192.168.76.2 calico-270721 localhost minikube]
	I1213 12:01:21.547002  636421 provision.go:177] copyRemoteCerts
	I1213 12:01:21.547072  636421 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 12:01:21.547112  636421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-270721
	I1213 12:01:21.565508  636421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/calico-270721/id_rsa Username:docker}
	I1213 12:01:21.671247  636421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 12:01:21.689739  636421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 12:01:21.708847  636421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 12:01:21.727638  636421 provision.go:87] duration metric: took 424.350425ms to configureAuth
	I1213 12:01:21.727683  636421 ubuntu.go:206] setting minikube options for container-runtime
	I1213 12:01:21.727889  636421 config.go:182] Loaded profile config "calico-270721": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 12:01:21.727903  636421 machine.go:97] duration metric: took 3.952079001s to provisionDockerMachine
	I1213 12:01:21.727910  636421 client.go:176] duration metric: took 9.847079759s to LocalClient.Create
	I1213 12:01:21.727925  636421 start.go:167] duration metric: took 9.847147739s to libmachine.API.Create "calico-270721"
	I1213 12:01:21.727935  636421 start.go:293] postStartSetup for "calico-270721" (driver="docker")
	I1213 12:01:21.727944  636421 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 12:01:21.728006  636421 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 12:01:21.728087  636421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-270721
	I1213 12:01:21.745396  636421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/calico-270721/id_rsa Username:docker}
	I1213 12:01:21.851136  636421 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 12:01:21.854508  636421 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 12:01:21.854534  636421 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 12:01:21.854547  636421 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/addons for local assets ...
	I1213 12:01:21.854600  636421 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/files for local assets ...
	I1213 12:01:21.854677  636421 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> 3089152.pem in /etc/ssl/certs
	I1213 12:01:21.854812  636421 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 12:01:21.862115  636421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 12:01:21.879955  636421 start.go:296] duration metric: took 152.004219ms for postStartSetup
	I1213 12:01:21.880334  636421 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-270721
	I1213 12:01:21.897329  636421 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/config.json ...
	I1213 12:01:21.897612  636421 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 12:01:21.897662  636421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-270721
	I1213 12:01:21.914385  636421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/calico-270721/id_rsa Username:docker}
	I1213 12:01:22.016437  636421 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 12:01:22.021619  636421 start.go:128] duration metric: took 10.144658818s to createHost
	I1213 12:01:22.021645  636421 start.go:83] releasing machines lock for "calico-270721", held for 10.144783242s
	I1213 12:01:22.021720  636421 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-270721
	I1213 12:01:22.039440  636421 ssh_runner.go:195] Run: cat /version.json
	I1213 12:01:22.039477  636421 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 12:01:22.039493  636421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-270721
	I1213 12:01:22.039535  636421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-270721
	I1213 12:01:22.064439  636421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/calico-270721/id_rsa Username:docker}
	I1213 12:01:22.064445  636421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/calico-270721/id_rsa Username:docker}
	I1213 12:01:22.171798  636421 ssh_runner.go:195] Run: systemctl --version
	I1213 12:01:22.268736  636421 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 12:01:22.273281  636421 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 12:01:22.273385  636421 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 12:01:22.301785  636421 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
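
The find/mv pipeline above renames any bridge or podman CNI configs so containerd will not load them before Calico is installed; files that already carry the .mk_disabled suffix are left alone. The same walk, sketched in Go:

    package main

    import (
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        // Approximation of the find expression: top-level files in
        // /etc/cni/net.d whose names mention bridge or podman, not yet disabled.
        matches, _ := filepath.Glob("/etc/cni/net.d/*")
        for _, path := range matches {
            info, err := os.Stat(path)
            if err != nil || info.IsDir() {
                continue
            }
            name := filepath.Base(path)
            if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
                continue
            }
            if strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            os.Rename(path, path+".mk_disabled") // e.g. 87-podman-bridge.conflist
        }
    }
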
	I1213 12:01:22.301811  636421 start.go:496] detecting cgroup driver to use...
	I1213 12:01:22.301845  636421 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 12:01:22.301899  636421 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 12:01:22.318052  636421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 12:01:22.331541  636421 docker.go:218] disabling cri-docker service (if available) ...
	I1213 12:01:22.331617  636421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 12:01:22.350526  636421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 12:01:22.370887  636421 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 12:01:22.488702  636421 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 12:01:22.623255  636421 docker.go:234] disabling docker service ...
	I1213 12:01:22.623323  636421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 12:01:22.647207  636421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 12:01:22.662796  636421 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 12:01:22.792499  636421 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 12:01:22.942389  636421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 12:01:22.962952  636421 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 12:01:22.982550  636421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 12:01:22.992181  636421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 12:01:23.008478  636421 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 12:01:23.008629  636421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 12:01:23.018548  636421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 12:01:23.027970  636421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 12:01:23.037268  636421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 12:01:23.046501  636421 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 12:01:23.054976  636421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 12:01:23.064296  636421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 12:01:23.074099  636421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 12:01:23.083384  636421 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 12:01:23.091170  636421 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 12:01:23.098767  636421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 12:01:23.220603  636421 ssh_runner.go:195] Run: sudo systemctl restart containerd
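
The run of sed commands above edits /etc/containerd/config.toml in place: pin the pause image, force SystemdCgroup = false to match the host's cgroupfs driver, migrate runc v1 runtime names to io.containerd.runc.v2, and re-enable unprivileged ports, before the daemon-reload and restart. One of those edits, the SystemdCgroup flip, as an equivalent Go sketch:

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/containerd/config.toml"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        // Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        data = re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
        if err := os.WriteFile(path, data, 0644); err != nil {
            panic(err)
        }
    }
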
	I1213 12:01:23.347832  636421 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 12:01:23.347925  636421 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 12:01:23.352139  636421 start.go:564] Will wait 60s for crictl version
	I1213 12:01:23.352214  636421 ssh_runner.go:195] Run: which crictl
	I1213 12:01:23.356059  636421 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 12:01:23.384651  636421 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 12:01:23.384739  636421 ssh_runner.go:195] Run: containerd --version
	I1213 12:01:23.405580  636421 ssh_runner.go:195] Run: containerd --version
	I1213 12:01:23.433010  636421 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.2.0 ...
	I1213 12:01:23.435967  636421 cli_runner.go:164] Run: docker network inspect calico-270721 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 12:01:23.453352  636421 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 12:01:23.457481  636421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
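
Both host-pinning steps (host.minikube.internal here, control-plane.minikube.internal later) use the same idempotent pattern: filter out any existing line for the name, append the fresh mapping, and copy the result back over /etc/hosts. A Go sketch of that rewrite, assuming root privileges:

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const ip, name = "192.168.76.1", "host.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any stale mapping for this name (tab-separated, as minikube writes it).
            if strings.HasSuffix(line, "\t"+name) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            panic(err)
        }
    }
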
	I1213 12:01:23.467588  636421 kubeadm.go:884] updating cluster {Name:calico-270721 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-270721 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 12:01:23.467703  636421 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 12:01:23.467788  636421 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 12:01:23.493999  636421 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 12:01:23.494027  636421 containerd.go:534] Images already preloaded, skipping extraction
	I1213 12:01:23.494091  636421 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 12:01:23.520075  636421 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 12:01:23.520101  636421 cache_images.go:86] Images are preloaded, skipping loading
	I1213 12:01:23.520109  636421 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 containerd true true} ...
	I1213 12:01:23.520206  636421 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-270721 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:calico-270721 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1213 12:01:23.520274  636421 ssh_runner.go:195] Run: sudo crictl info
	I1213 12:01:23.546160  636421 cni.go:84] Creating CNI manager for "calico"
	I1213 12:01:23.546205  636421 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 12:01:23.546238  636421 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-270721 NodeName:calico-270721 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 12:01:23.546358  636421 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "calico-270721"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 12:01:23.546437  636421 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 12:01:23.555167  636421 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 12:01:23.555263  636421 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 12:01:23.563629  636421 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1213 12:01:23.577807  636421 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 12:01:23.592306  636421 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
	I1213 12:01:23.606200  636421 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 12:01:23.611715  636421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 12:01:23.622142  636421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 12:01:23.755475  636421 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 12:01:23.774055  636421 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721 for IP: 192.168.76.2
	I1213 12:01:23.774123  636421 certs.go:195] generating shared ca certs ...
	I1213 12:01:23.774154  636421 certs.go:227] acquiring lock for ca certs: {Name:mkca84a85b1b664dba08ef567e84d493256f825e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:01:23.774347  636421 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key
	I1213 12:01:23.774427  636421 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key
	I1213 12:01:23.774465  636421 certs.go:257] generating profile certs ...
	I1213 12:01:23.774553  636421 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/client.key
	I1213 12:01:23.774599  636421 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/client.crt with IP's: []
	I1213 12:01:23.953271  636421 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/client.crt ...
	I1213 12:01:23.953304  636421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/client.crt: {Name:mkff0baa2a290ed6e9f7a15bda4ca5b367ae9538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:01:23.953502  636421 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/client.key ...
	I1213 12:01:23.953516  636421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/client.key: {Name:mk9d27b37d8298ee2581fb950da70a5512cd5d70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:01:23.953609  636421 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/apiserver.key.92db897c
	I1213 12:01:23.953626  636421 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/apiserver.crt.92db897c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1213 12:01:24.394223  636421 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/apiserver.crt.92db897c ...
	I1213 12:01:24.394255  636421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/apiserver.crt.92db897c: {Name:mk6e26bc566e427e2d63e4aa68826e57f3c82fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:01:24.394442  636421 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/apiserver.key.92db897c ...
	I1213 12:01:24.394458  636421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/apiserver.key.92db897c: {Name:mk6fc5460e699e127c2ebfb6d759edfaeae32f0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:01:24.394546  636421 certs.go:382] copying /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/apiserver.crt.92db897c -> /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/apiserver.crt
	I1213 12:01:24.394630  636421 certs.go:386] copying /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/apiserver.key.92db897c -> /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/apiserver.key
	I1213 12:01:24.394712  636421 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/proxy-client.key
	I1213 12:01:24.394731  636421 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/proxy-client.crt with IP's: []
	I1213 12:01:24.679804  636421 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/proxy-client.crt ...
	I1213 12:01:24.679850  636421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/proxy-client.crt: {Name:mk58fc59cf67d9631537c550683df54091de83e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:01:24.680036  636421 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/proxy-client.key ...
	I1213 12:01:24.680053  636421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/proxy-client.key: {Name:mkd6952946271ecb77df947579c8ac60783a1245 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
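
Three profile certs were just minted: a client cert for "minikube-user", an apiserver serving cert with SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2], and an aggregator proxy-client cert, each signed by a cached CA. A self-contained sketch of the client-cert case with crypto/x509; the self-signed CA generated here is only a stand-in for the cached minikubeCA key pair, and error handling is elided:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "os"
        "time"
    )

    func main() {
        // Stand-in CA: in the log this is the pre-existing minikubeCA on disk.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Client certificate for "minikube-user", signed by the CA.
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube-user"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
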
	I1213 12:01:24.680239  636421 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem (1338 bytes)
	W1213 12:01:24.680288  636421 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915_empty.pem, impossibly tiny 0 bytes
	I1213 12:01:24.680301  636421 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 12:01:24.680329  636421 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem (1082 bytes)
	I1213 12:01:24.680360  636421 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem (1123 bytes)
	I1213 12:01:24.680388  636421 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem (1675 bytes)
	I1213 12:01:24.680445  636421 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 12:01:24.681038  636421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 12:01:24.699920  636421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 12:01:24.719325  636421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 12:01:24.738491  636421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 12:01:24.757878  636421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 12:01:24.777965  636421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 12:01:24.796555  636421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 12:01:24.815059  636421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 12:01:24.833352  636421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 12:01:24.852224  636421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem --> /usr/share/ca-certificates/308915.pem (1338 bytes)
	I1213 12:01:24.872907  636421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /usr/share/ca-certificates/3089152.pem (1708 bytes)
	I1213 12:01:24.890977  636421 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 12:01:24.904930  636421 ssh_runner.go:195] Run: openssl version
	I1213 12:01:24.911364  636421 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/308915.pem
	I1213 12:01:24.919267  636421 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/308915.pem /etc/ssl/certs/308915.pem
	I1213 12:01:24.927180  636421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/308915.pem
	I1213 12:01:24.931315  636421 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:22 /usr/share/ca-certificates/308915.pem
	I1213 12:01:24.931385  636421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308915.pem
	I1213 12:01:24.972813  636421 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 12:01:24.980834  636421 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/308915.pem /etc/ssl/certs/51391683.0
	I1213 12:01:24.988540  636421 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3089152.pem
	I1213 12:01:24.996362  636421 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3089152.pem /etc/ssl/certs/3089152.pem
	I1213 12:01:25.013480  636421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3089152.pem
	I1213 12:01:25.018062  636421 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:22 /usr/share/ca-certificates/3089152.pem
	I1213 12:01:25.018159  636421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3089152.pem
	I1213 12:01:25.060510  636421 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 12:01:25.068792  636421 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3089152.pem /etc/ssl/certs/3ec20f2e.0
	I1213 12:01:25.077134  636421 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:01:25.085286  636421 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 12:01:25.093833  636421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:01:25.098184  636421 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:01:25.098254  636421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:01:25.141860  636421 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 12:01:25.150184  636421 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
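
The symlink names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-name hashes of the respective PEMs: the openssl x509 -hash calls compute them, and the TLS loader expects <hash>.0 links in /etc/ssl/certs. A sketch of one hash-and-link round trip, shelling out to openssl the way the runner does:

    package main

    import (
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"
        // openssl x509 -hash -noout prints the subject-name hash (e.g. b5213941).
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // ln -fs semantics: replace any existing link
        if err := os.Symlink(cert, link); err != nil {
            panic(err)
        }
    }
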
	I1213 12:01:25.158475  636421 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 12:01:25.163776  636421 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 12:01:25.163843  636421 kubeadm.go:401] StartCluster: {Name:calico-270721 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-270721 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 12:01:25.163921  636421 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 12:01:25.163985  636421 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 12:01:25.195663  636421 cri.go:89] found id: ""
	I1213 12:01:25.195743  636421 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 12:01:25.206323  636421 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 12:01:25.215094  636421 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 12:01:25.215192  636421 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 12:01:25.229170  636421 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 12:01:25.229193  636421 kubeadm.go:158] found existing configuration files:
	
	I1213 12:01:25.229250  636421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 12:01:25.237963  636421 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 12:01:25.238050  636421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 12:01:25.246223  636421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 12:01:25.254514  636421 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 12:01:25.254606  636421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 12:01:25.262799  636421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 12:01:25.270944  636421 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 12:01:25.271041  636421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 12:01:25.279249  636421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 12:01:25.287463  636421 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 12:01:25.287576  636421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 12:01:25.295386  636421 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
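
This Start line is the actual bootstrap: minikube prepends its bundled binaries to PATH and runs kubeadm init against the rendered config, waiving preflight checks that are meaningless inside a docker container (Swap, NumCPU, Mem, SystemVerification, the bridge-nf sysctl, and the pre-created dirs and manifests). The same invocation pattern from Go, with the ignore list abbreviated relative to the full one above:

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // bash -c so the PATH override applies to the kubeadm child process.
        script := `env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" ` +
            `kubeadm init --config /var/tmp/minikube/kubeadm.yaml ` +
            `--ignore-preflight-errors=Swap,NumCPU,Mem,SystemVerification`
        cmd := exec.Command("sudo", "/bin/bash", "-c", script)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
    }
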
	I1213 12:01:25.337390  636421 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 12:01:25.337459  636421 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 12:01:25.375658  636421 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 12:01:25.375738  636421 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 12:01:25.375781  636421 kubeadm.go:319] OS: Linux
	I1213 12:01:25.375832  636421 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 12:01:25.375884  636421 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 12:01:25.375935  636421 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 12:01:25.375989  636421 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 12:01:25.376041  636421 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 12:01:25.376093  636421 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 12:01:25.376147  636421 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 12:01:25.376200  636421 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 12:01:25.376251  636421 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 12:01:25.461266  636421 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 12:01:25.461464  636421 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 12:01:25.461617  636421 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 12:01:25.472777  636421 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 12:01:25.479734  636421 out.go:252]   - Generating certificates and keys ...
	I1213 12:01:25.479838  636421 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 12:01:25.479914  636421 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 12:01:25.617720  636421 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 12:01:25.924270  636421 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 12:01:26.270040  636421 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 12:01:26.808344  636421 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 12:01:27.206749  636421 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 12:01:27.207004  636421 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-270721 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 12:01:27.600510  636421 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 12:01:27.600714  636421 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-270721 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 12:01:27.670858  636421 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 12:01:28.830086  636421 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 12:01:28.929210  636421 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 12:01:28.935037  636421 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 12:01:29.301603  636421 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 12:01:30.427951  636421 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 12:01:31.081726  636421 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 12:01:32.016185  636421 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 12:01:32.377394  636421 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 12:01:32.378185  636421 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 12:01:32.381556  636421 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 12:01:32.385068  636421 out.go:252]   - Booting up control plane ...
	I1213 12:01:32.385174  636421 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 12:01:32.385259  636421 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 12:01:32.386122  636421 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 12:01:32.403076  636421 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 12:01:32.403186  636421 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 12:01:32.410773  636421 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 12:01:32.411220  636421 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 12:01:32.411500  636421 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 12:01:32.557065  636421 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 12:01:32.557186  636421 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 12:01:34.559085  636421 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001720472s
	I1213 12:01:34.562242  636421 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 12:01:34.562343  636421 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1213 12:01:34.562438  636421 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 12:01:34.562520  636421 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 12:01:38.992422  636421 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.429117343s
	I1213 12:01:39.947316  636421 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.385054999s
	I1213 12:01:41.564621  636421 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.002314368s
	I1213 12:01:41.603290  636421 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 12:01:41.622468  636421 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 12:01:41.638829  636421 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 12:01:41.639069  636421 kubeadm.go:319] [mark-control-plane] Marking the node calico-270721 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 12:01:41.652234  636421 kubeadm.go:319] [bootstrap-token] Using token: v4cr7b.rsnmgs9u04dzvwfh
	I1213 12:01:41.655221  636421 out.go:252]   - Configuring RBAC rules ...
	I1213 12:01:41.655366  636421 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 12:01:41.667091  636421 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 12:01:41.680229  636421 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 12:01:41.685086  636421 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 12:01:41.691405  636421 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 12:01:41.700258  636421 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 12:01:41.971577  636421 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 12:01:42.398265  636421 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 12:01:42.970820  636421 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 12:01:42.972027  636421 kubeadm.go:319] 
	I1213 12:01:42.972097  636421 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 12:01:42.972102  636421 kubeadm.go:319] 
	I1213 12:01:42.972179  636421 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 12:01:42.972183  636421 kubeadm.go:319] 
	I1213 12:01:42.972208  636421 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 12:01:42.972266  636421 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 12:01:42.972316  636421 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 12:01:42.972322  636421 kubeadm.go:319] 
	I1213 12:01:42.972376  636421 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 12:01:42.972380  636421 kubeadm.go:319] 
	I1213 12:01:42.972427  636421 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 12:01:42.972431  636421 kubeadm.go:319] 
	I1213 12:01:42.972482  636421 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 12:01:42.972557  636421 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 12:01:42.972625  636421 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 12:01:42.972629  636421 kubeadm.go:319] 
	I1213 12:01:42.972713  636421 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 12:01:42.972801  636421 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 12:01:42.972805  636421 kubeadm.go:319] 
	I1213 12:01:42.972896  636421 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token v4cr7b.rsnmgs9u04dzvwfh \
	I1213 12:01:42.973001  636421 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2b5aae63f59669f0b4e3ed658fbdddeef7a3996ea2c8f22710210607dc196205 \
	I1213 12:01:42.973021  636421 kubeadm.go:319] 	--control-plane 
	I1213 12:01:42.973025  636421 kubeadm.go:319] 
	I1213 12:01:42.973109  636421 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 12:01:42.973113  636421 kubeadm.go:319] 
	I1213 12:01:42.973195  636421 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token v4cr7b.rsnmgs9u04dzvwfh \
	I1213 12:01:42.973297  636421 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2b5aae63f59669f0b4e3ed658fbdddeef7a3996ea2c8f22710210607dc196205 
	I1213 12:01:42.976437  636421 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1213 12:01:42.976668  636421 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 12:01:42.976775  636421 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 12:01:42.976794  636421 cni.go:84] Creating CNI manager for "calico"
	I1213 12:01:42.980034  636421 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I1213 12:01:42.983055  636421 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1213 12:01:42.983117  636421 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (329943 bytes)
	I1213 12:01:43.003647  636421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1213 12:01:44.729293  636421 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.725606008s)
	I1213 12:01:44.729387  636421 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 12:01:44.729559  636421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 12:01:44.729703  636421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-270721 minikube.k8s.io/updated_at=2025_12_13T12_01_44_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=625889e93b3a3d0ab16814abcc3b4c90fb83309b minikube.k8s.io/name=calico-270721 minikube.k8s.io/primary=true
	I1213 12:01:44.900619  636421 ops.go:34] apiserver oom_adj: -16
	I1213 12:01:44.900728  636421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 12:01:45.401828  636421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 12:01:45.901792  636421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 12:01:46.401178  636421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 12:01:46.901658  636421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 12:01:47.140343  636421 kubeadm.go:1114] duration metric: took 2.410843127s to wait for elevateKubeSystemPrivileges
	I1213 12:01:47.140371  636421 kubeadm.go:403] duration metric: took 21.976534078s to StartCluster
	I1213 12:01:47.140387  636421 settings.go:142] acquiring lock: {Name:mk079e9a25ebbc2c8fbae42d4c6ed096a652c00b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:01:47.140450  636421 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 12:01:47.141362  636421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/kubeconfig: {Name:mk9039d1d506122ff52224469c478e739c5ebabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:01:47.141561  636421 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 12:01:47.141687  636421 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 12:01:47.141949  636421 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 12:01:47.142034  636421 addons.go:70] Setting storage-provisioner=true in profile "calico-270721"
	I1213 12:01:47.142050  636421 addons.go:239] Setting addon storage-provisioner=true in "calico-270721"
	I1213 12:01:47.142079  636421 host.go:66] Checking if "calico-270721" exists ...
	I1213 12:01:47.142104  636421 config.go:182] Loaded profile config "calico-270721": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 12:01:47.142174  636421 addons.go:70] Setting default-storageclass=true in profile "calico-270721"
	I1213 12:01:47.142204  636421 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "calico-270721"
	I1213 12:01:47.142563  636421 cli_runner.go:164] Run: docker container inspect calico-270721 --format={{.State.Status}}
	I1213 12:01:47.142835  636421 cli_runner.go:164] Run: docker container inspect calico-270721 --format={{.State.Status}}
	I1213 12:01:47.144685  636421 out.go:179] * Verifying Kubernetes components...
	I1213 12:01:47.147801  636421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 12:01:47.180365  636421 addons.go:239] Setting addon default-storageclass=true in "calico-270721"
	I1213 12:01:47.180409  636421 host.go:66] Checking if "calico-270721" exists ...
	I1213 12:01:47.180829  636421 cli_runner.go:164] Run: docker container inspect calico-270721 --format={{.State.Status}}
	I1213 12:01:47.193600  636421 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 12:01:47.196470  636421 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:01:47.196495  636421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 12:01:47.196568  636421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-270721
	I1213 12:01:47.220467  636421 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 12:01:47.220495  636421 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 12:01:47.220562  636421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-270721
	I1213 12:01:47.250828  636421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/calico-270721/id_rsa Username:docker}
	I1213 12:01:47.262352  636421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/calico-270721/id_rsa Username:docker}
	I1213 12:01:47.605510  636421 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 12:01:47.605611  636421 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 12:01:47.649529  636421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:01:47.693387  636421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 12:01:48.568023  636421 node_ready.go:35] waiting up to 15m0s for node "calico-270721" to be "Ready" ...
	I1213 12:01:48.568906  636421 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1213 12:01:48.885264  636421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.23569922s)
	I1213 12:01:48.885317  636421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.191905651s)
	I1213 12:01:48.894917  636421 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1213 12:01:48.897839  636421 addons.go:530] duration metric: took 1.755884898s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1213 12:01:49.075411  636421 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-270721" context rescaled to 1 replicas
	W1213 12:01:50.571051  636421 node_ready.go:57] node "calico-270721" has "Ready":"False" status (will retry)
	W1213 12:01:52.572296  636421 node_ready.go:57] node "calico-270721" has "Ready":"False" status (will retry)
	I1213 12:01:53.570811  636421 node_ready.go:49] node "calico-270721" is "Ready"
	I1213 12:01:53.570841  636421 node_ready.go:38] duration metric: took 5.00278588s for node "calico-270721" to be "Ready" ...
	I1213 12:01:53.570854  636421 api_server.go:52] waiting for apiserver process to appear ...
	I1213 12:01:53.570943  636421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:01:53.584221  636421 api_server.go:72] duration metric: took 6.442631751s to wait for apiserver process to appear ...
	I1213 12:01:53.584248  636421 api_server.go:88] waiting for apiserver healthz status ...
	I1213 12:01:53.584288  636421 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1213 12:01:53.592594  636421 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1213 12:01:53.593903  636421 api_server.go:141] control plane version: v1.34.2
	I1213 12:01:53.593969  636421 api_server.go:131] duration metric: took 9.711996ms to wait for apiserver health ...
	I1213 12:01:53.593993  636421 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 12:01:53.598180  636421 system_pods.go:59] 9 kube-system pods found
	I1213 12:01:53.598229  636421 system_pods.go:61] "calico-kube-controllers-5c676f698c-ngwms" [a292d8e6-4b73-4bfa-b772-208e02d29275] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 12:01:53.598241  636421 system_pods.go:61] "calico-node-nhhx8" [962d4003-c8af-4d22-90ed-c717d0f14710] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 12:01:53.598250  636421 system_pods.go:61] "coredns-66bc5c9577-52bt2" [515faea1-62b2-4037-837f-7a29e65cb091] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:01:53.598259  636421 system_pods.go:61] "etcd-calico-270721" [56111c3e-cd9a-4080-906e-ee6a8d0ea025] Running
	I1213 12:01:53.598266  636421 system_pods.go:61] "kube-apiserver-calico-270721" [1555476b-b3ff-4597-8943-87a94bdfa587] Running
	I1213 12:01:53.598275  636421 system_pods.go:61] "kube-controller-manager-calico-270721" [87f52309-0bf1-44cd-b4d3-fbf2a8a47efd] Running
	I1213 12:01:53.598278  636421 system_pods.go:61] "kube-proxy-795cl" [61c72953-5b44-42a1-bd5b-3e58fcade8da] Running
	I1213 12:01:53.598282  636421 system_pods.go:61] "kube-scheduler-calico-270721" [cb79461a-efa8-492c-94fb-9dc3129b4461] Running
	I1213 12:01:53.598287  636421 system_pods.go:61] "storage-provisioner" [dbd68185-356e-4e5d-9975-22505963e2e6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 12:01:53.598303  636421 system_pods.go:74] duration metric: took 4.291037ms to wait for pod list to return data ...
	I1213 12:01:53.598311  636421 default_sa.go:34] waiting for default service account to be created ...
	I1213 12:01:53.604171  636421 default_sa.go:45] found service account: "default"
	I1213 12:01:53.604209  636421 default_sa.go:55] duration metric: took 5.891389ms for default service account to be created ...
	I1213 12:01:53.604236  636421 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 12:01:53.607775  636421 system_pods.go:86] 9 kube-system pods found
	I1213 12:01:53.607812  636421 system_pods.go:89] "calico-kube-controllers-5c676f698c-ngwms" [a292d8e6-4b73-4bfa-b772-208e02d29275] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 12:01:53.607824  636421 system_pods.go:89] "calico-node-nhhx8" [962d4003-c8af-4d22-90ed-c717d0f14710] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 12:01:53.607858  636421 system_pods.go:89] "coredns-66bc5c9577-52bt2" [515faea1-62b2-4037-837f-7a29e65cb091] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:01:53.607871  636421 system_pods.go:89] "etcd-calico-270721" [56111c3e-cd9a-4080-906e-ee6a8d0ea025] Running
	I1213 12:01:53.607876  636421 system_pods.go:89] "kube-apiserver-calico-270721" [1555476b-b3ff-4597-8943-87a94bdfa587] Running
	I1213 12:01:53.607881  636421 system_pods.go:89] "kube-controller-manager-calico-270721" [87f52309-0bf1-44cd-b4d3-fbf2a8a47efd] Running
	I1213 12:01:53.607892  636421 system_pods.go:89] "kube-proxy-795cl" [61c72953-5b44-42a1-bd5b-3e58fcade8da] Running
	I1213 12:01:53.607896  636421 system_pods.go:89] "kube-scheduler-calico-270721" [cb79461a-efa8-492c-94fb-9dc3129b4461] Running
	I1213 12:01:53.607902  636421 system_pods.go:89] "storage-provisioner" [dbd68185-356e-4e5d-9975-22505963e2e6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 12:01:53.607943  636421 retry.go:31] will retry after 278.453721ms: missing components: kube-dns
	I1213 12:01:53.894090  636421 system_pods.go:86] 9 kube-system pods found
	I1213 12:01:53.894128  636421 system_pods.go:89] "calico-kube-controllers-5c676f698c-ngwms" [a292d8e6-4b73-4bfa-b772-208e02d29275] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 12:01:53.894138  636421 system_pods.go:89] "calico-node-nhhx8" [962d4003-c8af-4d22-90ed-c717d0f14710] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 12:01:53.894173  636421 system_pods.go:89] "coredns-66bc5c9577-52bt2" [515faea1-62b2-4037-837f-7a29e65cb091] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:01:53.894187  636421 system_pods.go:89] "etcd-calico-270721" [56111c3e-cd9a-4080-906e-ee6a8d0ea025] Running
	I1213 12:01:53.894193  636421 system_pods.go:89] "kube-apiserver-calico-270721" [1555476b-b3ff-4597-8943-87a94bdfa587] Running
	I1213 12:01:53.894198  636421 system_pods.go:89] "kube-controller-manager-calico-270721" [87f52309-0bf1-44cd-b4d3-fbf2a8a47efd] Running
	I1213 12:01:53.894209  636421 system_pods.go:89] "kube-proxy-795cl" [61c72953-5b44-42a1-bd5b-3e58fcade8da] Running
	I1213 12:01:53.894213  636421 system_pods.go:89] "kube-scheduler-calico-270721" [cb79461a-efa8-492c-94fb-9dc3129b4461] Running
	I1213 12:01:53.894220  636421 system_pods.go:89] "storage-provisioner" [dbd68185-356e-4e5d-9975-22505963e2e6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 12:01:53.894255  636421 retry.go:31] will retry after 344.732105ms: missing components: kube-dns
	I1213 12:01:54.243932  636421 system_pods.go:86] 9 kube-system pods found
	I1213 12:01:54.243974  636421 system_pods.go:89] "calico-kube-controllers-5c676f698c-ngwms" [a292d8e6-4b73-4bfa-b772-208e02d29275] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 12:01:54.243984  636421 system_pods.go:89] "calico-node-nhhx8" [962d4003-c8af-4d22-90ed-c717d0f14710] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 12:01:54.243992  636421 system_pods.go:89] "coredns-66bc5c9577-52bt2" [515faea1-62b2-4037-837f-7a29e65cb091] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:01:54.243998  636421 system_pods.go:89] "etcd-calico-270721" [56111c3e-cd9a-4080-906e-ee6a8d0ea025] Running
	I1213 12:01:54.244004  636421 system_pods.go:89] "kube-apiserver-calico-270721" [1555476b-b3ff-4597-8943-87a94bdfa587] Running
	I1213 12:01:54.244008  636421 system_pods.go:89] "kube-controller-manager-calico-270721" [87f52309-0bf1-44cd-b4d3-fbf2a8a47efd] Running
	I1213 12:01:54.244018  636421 system_pods.go:89] "kube-proxy-795cl" [61c72953-5b44-42a1-bd5b-3e58fcade8da] Running
	I1213 12:01:54.244022  636421 system_pods.go:89] "kube-scheduler-calico-270721" [cb79461a-efa8-492c-94fb-9dc3129b4461] Running
	I1213 12:01:54.244028  636421 system_pods.go:89] "storage-provisioner" [dbd68185-356e-4e5d-9975-22505963e2e6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 12:01:54.244052  636421 retry.go:31] will retry after 337.620243ms: missing components: kube-dns
	I1213 12:01:54.586161  636421 system_pods.go:86] 9 kube-system pods found
	I1213 12:01:54.586200  636421 system_pods.go:89] "calico-kube-controllers-5c676f698c-ngwms" [a292d8e6-4b73-4bfa-b772-208e02d29275] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 12:01:54.586219  636421 system_pods.go:89] "calico-node-nhhx8" [962d4003-c8af-4d22-90ed-c717d0f14710] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 12:01:54.586227  636421 system_pods.go:89] "coredns-66bc5c9577-52bt2" [515faea1-62b2-4037-837f-7a29e65cb091] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:01:54.586232  636421 system_pods.go:89] "etcd-calico-270721" [56111c3e-cd9a-4080-906e-ee6a8d0ea025] Running
	I1213 12:01:54.586238  636421 system_pods.go:89] "kube-apiserver-calico-270721" [1555476b-b3ff-4597-8943-87a94bdfa587] Running
	I1213 12:01:54.586248  636421 system_pods.go:89] "kube-controller-manager-calico-270721" [87f52309-0bf1-44cd-b4d3-fbf2a8a47efd] Running
	I1213 12:01:54.586253  636421 system_pods.go:89] "kube-proxy-795cl" [61c72953-5b44-42a1-bd5b-3e58fcade8da] Running
	I1213 12:01:54.586260  636421 system_pods.go:89] "kube-scheduler-calico-270721" [cb79461a-efa8-492c-94fb-9dc3129b4461] Running
	I1213 12:01:54.586264  636421 system_pods.go:89] "storage-provisioner" [dbd68185-356e-4e5d-9975-22505963e2e6] Running
	I1213 12:01:54.586279  636421 retry.go:31] will retry after 527.588989ms: missing components: kube-dns
	I1213 12:01:55.120586  636421 system_pods.go:86] 9 kube-system pods found
	I1213 12:01:55.120625  636421 system_pods.go:89] "calico-kube-controllers-5c676f698c-ngwms" [a292d8e6-4b73-4bfa-b772-208e02d29275] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 12:01:55.120636  636421 system_pods.go:89] "calico-node-nhhx8" [962d4003-c8af-4d22-90ed-c717d0f14710] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 12:01:55.120669  636421 system_pods.go:89] "coredns-66bc5c9577-52bt2" [515faea1-62b2-4037-837f-7a29e65cb091] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:01:55.120675  636421 system_pods.go:89] "etcd-calico-270721" [56111c3e-cd9a-4080-906e-ee6a8d0ea025] Running
	I1213 12:01:55.120680  636421 system_pods.go:89] "kube-apiserver-calico-270721" [1555476b-b3ff-4597-8943-87a94bdfa587] Running
	I1213 12:01:55.120686  636421 system_pods.go:89] "kube-controller-manager-calico-270721" [87f52309-0bf1-44cd-b4d3-fbf2a8a47efd] Running
	I1213 12:01:55.120698  636421 system_pods.go:89] "kube-proxy-795cl" [61c72953-5b44-42a1-bd5b-3e58fcade8da] Running
	I1213 12:01:55.120702  636421 system_pods.go:89] "kube-scheduler-calico-270721" [cb79461a-efa8-492c-94fb-9dc3129b4461] Running
	I1213 12:01:55.120707  636421 system_pods.go:89] "storage-provisioner" [dbd68185-356e-4e5d-9975-22505963e2e6] Running
	I1213 12:01:55.120731  636421 retry.go:31] will retry after 512.68838ms: missing components: kube-dns
	I1213 12:01:55.637826  636421 system_pods.go:86] 9 kube-system pods found
	I1213 12:01:55.637864  636421 system_pods.go:89] "calico-kube-controllers-5c676f698c-ngwms" [a292d8e6-4b73-4bfa-b772-208e02d29275] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 12:01:55.637874  636421 system_pods.go:89] "calico-node-nhhx8" [962d4003-c8af-4d22-90ed-c717d0f14710] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 12:01:55.637905  636421 system_pods.go:89] "coredns-66bc5c9577-52bt2" [515faea1-62b2-4037-837f-7a29e65cb091] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:01:55.637919  636421 system_pods.go:89] "etcd-calico-270721" [56111c3e-cd9a-4080-906e-ee6a8d0ea025] Running
	I1213 12:01:55.637925  636421 system_pods.go:89] "kube-apiserver-calico-270721" [1555476b-b3ff-4597-8943-87a94bdfa587] Running
	I1213 12:01:55.637930  636421 system_pods.go:89] "kube-controller-manager-calico-270721" [87f52309-0bf1-44cd-b4d3-fbf2a8a47efd] Running
	I1213 12:01:55.637934  636421 system_pods.go:89] "kube-proxy-795cl" [61c72953-5b44-42a1-bd5b-3e58fcade8da] Running
	I1213 12:01:55.637941  636421 system_pods.go:89] "kube-scheduler-calico-270721" [cb79461a-efa8-492c-94fb-9dc3129b4461] Running
	I1213 12:01:55.637949  636421 system_pods.go:89] "storage-provisioner" [dbd68185-356e-4e5d-9975-22505963e2e6] Running
	I1213 12:01:55.637973  636421 retry.go:31] will retry after 922.601222ms: missing components: kube-dns
	I1213 12:01:56.564981  636421 system_pods.go:86] 9 kube-system pods found
	I1213 12:01:56.565019  636421 system_pods.go:89] "calico-kube-controllers-5c676f698c-ngwms" [a292d8e6-4b73-4bfa-b772-208e02d29275] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1213 12:01:56.565029  636421 system_pods.go:89] "calico-node-nhhx8" [962d4003-c8af-4d22-90ed-c717d0f14710] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1213 12:01:56.565037  636421 system_pods.go:89] "coredns-66bc5c9577-52bt2" [515faea1-62b2-4037-837f-7a29e65cb091] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:01:56.565042  636421 system_pods.go:89] "etcd-calico-270721" [56111c3e-cd9a-4080-906e-ee6a8d0ea025] Running
	I1213 12:01:56.565049  636421 system_pods.go:89] "kube-apiserver-calico-270721" [1555476b-b3ff-4597-8943-87a94bdfa587] Running
	I1213 12:01:56.565054  636421 system_pods.go:89] "kube-controller-manager-calico-270721" [87f52309-0bf1-44cd-b4d3-fbf2a8a47efd] Running
	I1213 12:01:56.565058  636421 system_pods.go:89] "kube-proxy-795cl" [61c72953-5b44-42a1-bd5b-3e58fcade8da] Running
	I1213 12:01:56.565069  636421 system_pods.go:89] "kube-scheduler-calico-270721" [cb79461a-efa8-492c-94fb-9dc3129b4461] Running
	I1213 12:01:56.565073  636421 system_pods.go:89] "storage-provisioner" [dbd68185-356e-4e5d-9975-22505963e2e6] Running
	I1213 12:01:56.565087  636421 retry.go:31] will retry after 1.101039649s: missing components: kube-dns
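The trace above shows minikube's readiness loop: it re-lists the kube-system pods with a growing backoff until kube-dns (CoreDNS) reports Running. A minimal shell sketch of the same wait, assuming kubectl already points at this cluster and that CoreDNS carries the standard k8s-app=kube-dns label:

    # Poll kube-system until the CoreDNS pod phase is Running, or give up after ~2 minutes.
    for i in $(seq 1 60); do
      phase=$(kubectl -n kube-system get pods -l k8s-app=kube-dns -o jsonpath='{.items[*].status.phase}')
      echo "$phase" | grep -q Running && break
      sleep 2
    done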
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.850948040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.850964713Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.851002933Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.851021788Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.851032094Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.851043467Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.851052681Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.851068796Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.851086577Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.851121301Z" level=info msg="Connect containerd service"
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.851401698Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.851964747Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.867726494Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.868237695Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.868561226Z" level=info msg="Start subscribing containerd event"
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.868632505Z" level=info msg="Start recovering state"
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.889278015Z" level=info msg="Start event monitor"
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.889343254Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.889355102Z" level=info msg="Start streaming server"
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.889372054Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.889392994Z" level=info msg="runtime interface starting up..."
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.889400510Z" level=info msg="starting plugins..."
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.889437261Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 11:46:53 no-preload-333352 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.891303551Z" level=info msg="containerd successfully booted in 0.061815s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:01:59.721222    8138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:01:59.723171    8138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:01:59.727295    8138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:01:59.728043    8138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:01:59.729549    8138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +12.907693] overlayfs: idmapped layers are currently not supported
	[Dec13 09:25] overlayfs: idmapped layers are currently not supported
	[ +26.192425] overlayfs: idmapped layers are currently not supported
	[Dec13 09:26] overlayfs: idmapped layers are currently not supported
	[ +25.729788] overlayfs: idmapped layers are currently not supported
	[Dec13 09:27] overlayfs: idmapped layers are currently not supported
	[Dec13 09:28] overlayfs: idmapped layers are currently not supported
	[Dec13 09:31] overlayfs: idmapped layers are currently not supported
	[Dec13 09:32] overlayfs: idmapped layers are currently not supported
	[ +17.979093] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec13 09:43] overlayfs: idmapped layers are currently not supported
	[Dec13 09:45] overlayfs: idmapped layers are currently not supported
	[ +25.885102] overlayfs: idmapped layers are currently not supported
	[Dec13 09:46] overlayfs: idmapped layers are currently not supported
	[ +22.078149] overlayfs: idmapped layers are currently not supported
	[Dec13 09:47] overlayfs: idmapped layers are currently not supported
	[Dec13 09:48] overlayfs: idmapped layers are currently not supported
	[Dec13 09:49] overlayfs: idmapped layers are currently not supported
	[Dec13 09:51] overlayfs: idmapped layers are currently not supported
	[ +17.043564] overlayfs: idmapped layers are currently not supported
	[Dec13 09:52] overlayfs: idmapped layers are currently not supported
	[Dec13 09:53] overlayfs: idmapped layers are currently not supported
	[Dec13 10:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:19] hrtimer: interrupt took 21247146 ns
	
	
	==> kernel <==
	 12:01:59 up  4:44,  0 user,  load average: 1.54, 1.47, 1.42
	Linux no-preload-333352 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 12:01:56 no-preload-333352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:01:57 no-preload-333352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1202.
	Dec 13 12:01:57 no-preload-333352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:01:57 no-preload-333352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:01:57 no-preload-333352 kubelet[8007]: E1213 12:01:57.460876    8007 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:01:57 no-preload-333352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:01:57 no-preload-333352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:01:58 no-preload-333352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1203.
	Dec 13 12:01:58 no-preload-333352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:01:58 no-preload-333352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:01:58 no-preload-333352 kubelet[8020]: E1213 12:01:58.265616    8020 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:01:58 no-preload-333352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:01:58 no-preload-333352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:01:58 no-preload-333352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1204.
	Dec 13 12:01:58 no-preload-333352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:01:58 no-preload-333352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:01:59 no-preload-333352 kubelet[8049]: E1213 12:01:59.006726    8049 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:01:59 no-preload-333352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:01:59 no-preload-333352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:01:59 no-preload-333352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1205.
	Dec 13 12:01:59 no-preload-333352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:01:59 no-preload-333352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:01:59 no-preload-333352 kubelet[8142]: E1213 12:01:59.776221    8142 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:01:59 no-preload-333352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:01:59 no-preload-333352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
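The kubelet section of the log above shows why this node never recovers: this kubelet build refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so systemd restarts it in a tight loop (restart counter 1202 through 1205). A quick way to check which cgroup hierarchy a host is running, assuming a standard systemd mount layout:

    # "cgroup2fs" means the unified cgroup v2 hierarchy; "tmpfs" means legacy cgroup v1.
    stat -fc %T /sys/fs/cgroup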
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-333352 -n no-preload-333352
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-333352 -n no-preload-333352: exit status 2 (712.52274ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-333352" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.74s)

TestStartStop/group/newest-cni/serial/Pause (9.68s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-796924 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-796924 -n newest-cni-796924
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-796924 -n newest-cni-796924: exit status 2 (316.77014ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-796924 -n newest-cni-796924
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-796924 -n newest-cni-796924: exit status 2 (327.343915ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-796924 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-796924 -n newest-cni-796924
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-796924 -n newest-cni-796924: exit status 2 (316.976534ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause apiserver status = "Stopped"; want = "Running"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-796924 -n newest-cni-796924
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-796924 -n newest-cni-796924: exit status 2 (309.043032ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause kubelet status = "Stopped"; want = "Running"
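The assertions above encode the pause test's contract: after pause the apiserver status should read "Paused", and after unpause both the apiserver and the kubelet should read "Running"; in this run every probe returned "Stopped". The same sequence, reproduced as plain shell against this profile (a sketch, assuming the minikube binary from this run is on PATH as minikube):

    # Pause, then confirm the apiserver is reported as Paused.
    minikube pause -p newest-cni-796924
    minikube status --format='{{.APIServer}}' -p newest-cni-796924   # want: Paused
    # Unpause, then confirm apiserver and kubelet are reported as Running.
    minikube unpause -p newest-cni-796924
    minikube status --format='{{.APIServer}}' -p newest-cni-796924   # want: Running
    minikube status --format='{{.Kubelet}}' -p newest-cni-796924     # want: Running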
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-796924
helpers_test.go:244: (dbg) docker inspect newest-cni-796924:

-- stdout --
	[
	    {
	        "Id": "27aba94e8ede7488df815559f44c17888a5d8cf06635a1b1d81eb33d3d933273",
	        "Created": "2025-12-13T11:41:45.560617227Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 604142,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T11:51:48.770524373Z",
	            "FinishedAt": "2025-12-13T11:51:47.382046067Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/27aba94e8ede7488df815559f44c17888a5d8cf06635a1b1d81eb33d3d933273/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/27aba94e8ede7488df815559f44c17888a5d8cf06635a1b1d81eb33d3d933273/hostname",
	        "HostsPath": "/var/lib/docker/containers/27aba94e8ede7488df815559f44c17888a5d8cf06635a1b1d81eb33d3d933273/hosts",
	        "LogPath": "/var/lib/docker/containers/27aba94e8ede7488df815559f44c17888a5d8cf06635a1b1d81eb33d3d933273/27aba94e8ede7488df815559f44c17888a5d8cf06635a1b1d81eb33d3d933273-json.log",
	        "Name": "/newest-cni-796924",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-796924:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-796924",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "27aba94e8ede7488df815559f44c17888a5d8cf06635a1b1d81eb33d3d933273",
	                "LowerDir": "/var/lib/docker/overlay2/e3273dd39227f26c709bd7f69a32b4360acb8650246414f8098119d5f28e0f70-init/diff:/var/lib/docker/overlay2/5d0ca86320f2c06765995d644a73af30cd6ddb36f5c1a2a6b1ebb24af53cdabd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e3273dd39227f26c709bd7f69a32b4360acb8650246414f8098119d5f28e0f70/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e3273dd39227f26c709bd7f69a32b4360acb8650246414f8098119d5f28e0f70/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e3273dd39227f26c709bd7f69a32b4360acb8650246414f8098119d5f28e0f70/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-796924",
	                "Source": "/var/lib/docker/volumes/newest-cni-796924/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-796924",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-796924",
	                "name.minikube.sigs.k8s.io": "newest-cni-796924",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b9bb40aac9de7cd1274edecaff0f8eaf098acb0d5c0799c0a940ae7311a572ff",
	            "SandboxKey": "/var/run/docker/netns/b9bb40aac9de",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-796924": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:8b:15:a0:38:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "524b54a7afb58fdfadc2532a94da198ca12aafc23248ec4905999b39dfe064e0",
	                    "EndpointID": "b589d458f24f437f5bf8379bb70662db004fdd873d4df2f7211ededbab3c7988",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-796924",
	                        "27aba94e8ede"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
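The inspect dump above is the quickest way to see how the paused node is wired: a privileged container capped at 3 GiB of memory and 2 CPUs (Memory/NanoCpus in HostConfig), with ports 22, 2376, 5000, 8443 and 32443 published on loopback. The mapped SSH port can be recovered with the same Go template minikube itself uses further down in this log; a minimal Go sketch of that lookup, assuming docker is on PATH and the profile container exists:

// Pull the host port Docker mapped to the guest's sshd (22/tcp), using
// the same template format seen in the cli_runner lines below.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hostSSHPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("newest-cni-796924")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("sshd published on 127.0.0.1:" + port) // 33440 in the dump above
}

Against the NetworkSettings block above this yields 33440, the port the provisioner dials at 11:51:49.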
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-796924 -n newest-cni-796924
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-796924 -n newest-cni-796924: exit status 2 (367.302719ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
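The "(may be ok)" caveat reflects that minikube status reports component health through its exit code instead of failing outright: per the command's own help text, the host, cluster, and Kubernetes states are encoded on the low bits, right to left, so exit status 2 with a Running host is what a paused cluster is expected to return. A small Go sketch of that decoding, assuming the documented bit layout (check minikube status --help; this is not minikube source):

package main

import "fmt"

// decodeStatus interprets the assumed exit-code bitmask: bit 0 set means
// the host is not running, bit 1 the cluster, bit 2 Kubernetes.
func decodeStatus(code int) []string {
	names := []string{"host not running", "cluster not running", "kubernetes not running"}
	var bad []string
	for i, n := range names {
		if code&(1<<i) != 0 {
			bad = append(bad, n)
		}
	}
	return bad
}

func main() {
	fmt.Println(decodeStatus(2)) // [cluster not running] -- consistent with the Running host above
}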
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-796924 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-796924 logs -n 25: (1.628467378s)
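For reference, the post-mortem collection step boils down to the profile-scoped logs subcommand with a line cap; a hypothetical Go helper that reproduces it outside the harness (not the actual helpers_test.go code):

package main

import (
	"fmt"
	"os/exec"
)

// postMortemLogs mirrors the step logged above: fetch the last 25 log
// lines for a profile via the minikube binary under test.
func postMortemLogs(minikubeBin, profile string) (string, error) {
	out, err := exec.Command(minikubeBin, "-p", profile, "logs", "-n", "25").CombinedOutput()
	return string(out), err
}

func main() {
	logs, err := postMortemLogs("out/minikube-linux-arm64", "newest-cni-796924")
	if err != nil {
		fmt.Println("logs command failed:", err)
	}
	fmt.Print(logs)
}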
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p embed-certs-951675                                                                                                                                                                                                                                      │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ delete  │ -p embed-certs-951675                                                                                                                                                                                                                                      │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ delete  │ -p disable-driver-mounts-823668                                                                                                                                                                                                                            │ disable-driver-mounts-823668 │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ start   │ -p default-k8s-diff-port-191845 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:40 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-191845 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:40 UTC │
	│ stop    │ -p default-k8s-diff-port-191845 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:40 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-191845 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:40 UTC │
	│ start   │ -p default-k8s-diff-port-191845 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:41 UTC │
	│ image   │ default-k8s-diff-port-191845 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ pause   │ -p default-k8s-diff-port-191845 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ unpause │ -p default-k8s-diff-port-191845 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ delete  │ -p default-k8s-diff-port-191845                                                                                                                                                                                                                            │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ delete  │ -p default-k8s-diff-port-191845                                                                                                                                                                                                                            │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ start   │ -p newest-cni-796924 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-796924            │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-333352 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-333352            │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ stop    │ -p no-preload-333352 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-333352            │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │ 13 Dec 25 11:46 UTC │
	│ addons  │ enable dashboard -p no-preload-333352 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-333352            │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │ 13 Dec 25 11:46 UTC │
	│ start   │ -p no-preload-333352 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-333352            │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-796924 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-796924            │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │                     │
	│ stop    │ -p newest-cni-796924 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-796924            │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable dashboard -p newest-cni-796924 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-796924            │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ start   │ -p newest-cni-796924 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-796924            │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │                     │
	│ image   │ newest-cni-796924 image list --format=json                                                                                                                                                                                                                 │ newest-cni-796924            │ jenkins │ v1.37.0 │ 13 Dec 25 11:58 UTC │ 13 Dec 25 11:58 UTC │
	│ pause   │ -p newest-cni-796924 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-796924            │ jenkins │ v1.37.0 │ 13 Dec 25 11:58 UTC │ 13 Dec 25 11:58 UTC │
	│ unpause │ -p newest-cni-796924 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-796924            │ jenkins │ v1.37.0 │ 13 Dec 25 11:58 UTC │ 13 Dec 25 11:58 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 11:51:48
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 11:51:48.463604  604010 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:51:48.463796  604010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:51:48.463823  604010 out.go:374] Setting ErrFile to fd 2...
	I1213 11:51:48.463842  604010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:51:48.464235  604010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 11:51:48.465119  604010 out.go:368] Setting JSON to false
	I1213 11:51:48.466102  604010 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":16461,"bootTime":1765610247,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 11:51:48.466204  604010 start.go:143] virtualization:  
	I1213 11:51:48.469444  604010 out.go:179] * [newest-cni-796924] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:51:48.473497  604010 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:51:48.473608  604010 notify.go:221] Checking for updates...
	I1213 11:51:48.479464  604010 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:51:48.482541  604010 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 11:51:48.485448  604010 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 11:51:48.488462  604010 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:51:48.491424  604010 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:51:48.494980  604010 config.go:182] Loaded profile config "newest-cni-796924": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:51:48.495553  604010 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:51:48.518013  604010 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:51:48.518194  604010 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:51:48.596406  604010 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:51:48.586781308 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:51:48.596541  604010 docker.go:319] overlay module found
	I1213 11:51:48.599865  604010 out.go:179] * Using the docker driver based on existing profile
	I1213 11:51:48.602647  604010 start.go:309] selected driver: docker
	I1213 11:51:48.602672  604010 start.go:927] validating driver "docker" against &{Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:51:48.602834  604010 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:51:48.603569  604010 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:51:48.671569  604010 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:51:48.654666754 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:51:48.671930  604010 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 11:51:48.671965  604010 cni.go:84] Creating CNI manager for ""
	I1213 11:51:48.672022  604010 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 11:51:48.672078  604010 start.go:353] cluster config:
	{Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:51:48.675265  604010 out.go:179] * Starting "newest-cni-796924" primary control-plane node in "newest-cni-796924" cluster
	I1213 11:51:48.678207  604010 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 11:51:48.681114  604010 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:51:48.683920  604010 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 11:51:48.683976  604010 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 11:51:48.683989  604010 cache.go:65] Caching tarball of preloaded images
	I1213 11:51:48.684102  604010 preload.go:238] Found /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 11:51:48.684116  604010 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 11:51:48.684232  604010 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/config.json ...
	I1213 11:51:48.684464  604010 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:51:48.711458  604010 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:51:48.711481  604010 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:51:48.711496  604010 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:51:48.711527  604010 start.go:360] acquireMachinesLock for newest-cni-796924: {Name:mkb23dc851632c47983afd0f3cb215d071a4c6d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:51:48.711588  604010 start.go:364] duration metric: took 38.818µs to acquireMachinesLock for "newest-cni-796924"
	I1213 11:51:48.711608  604010 start.go:96] Skipping create...Using existing machine configuration
	I1213 11:51:48.711613  604010 fix.go:54] fixHost starting: 
	I1213 11:51:48.711888  604010 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:51:48.735758  604010 fix.go:112] recreateIfNeeded on newest-cni-796924: state=Stopped err=<nil>
	W1213 11:51:48.735799  604010 fix.go:138] unexpected machine state, will restart: <nil>
	W1213 11:51:48.171125  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:50.670988  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:51:48.739083  604010 out.go:252] * Restarting existing docker container for "newest-cni-796924" ...
	I1213 11:51:48.739191  604010 cli_runner.go:164] Run: docker start newest-cni-796924
	I1213 11:51:48.989234  604010 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:51:49.013708  604010 kic.go:430] container "newest-cni-796924" state is running.
	I1213 11:51:49.014143  604010 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-796924
	I1213 11:51:49.035818  604010 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/config.json ...
	I1213 11:51:49.036044  604010 machine.go:94] provisionDockerMachine start ...
	I1213 11:51:49.036107  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:49.066663  604010 main.go:143] libmachine: Using SSH client type: native
	I1213 11:51:49.067143  604010 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1213 11:51:49.067157  604010 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:51:49.067832  604010 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47590->127.0.0.1:33440: read: connection reset by peer
	I1213 11:51:52.226322  604010 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-796924
	
	I1213 11:51:52.226353  604010 ubuntu.go:182] provisioning hostname "newest-cni-796924"
	I1213 11:51:52.226417  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:52.244890  604010 main.go:143] libmachine: Using SSH client type: native
	I1213 11:51:52.245240  604010 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1213 11:51:52.245259  604010 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-796924 && echo "newest-cni-796924" | sudo tee /etc/hostname
	I1213 11:51:52.409909  604010 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-796924
	
	I1213 11:51:52.410005  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:52.440908  604010 main.go:143] libmachine: Using SSH client type: native
	I1213 11:51:52.441219  604010 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1213 11:51:52.441235  604010 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-796924' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-796924/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-796924' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:51:52.595320  604010 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:51:52.595345  604010 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-307042/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-307042/.minikube}
	I1213 11:51:52.595378  604010 ubuntu.go:190] setting up certificates
	I1213 11:51:52.595395  604010 provision.go:84] configureAuth start
	I1213 11:51:52.595456  604010 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-796924
	I1213 11:51:52.612730  604010 provision.go:143] copyHostCerts
	I1213 11:51:52.612805  604010 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem, removing ...
	I1213 11:51:52.612815  604010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem
	I1213 11:51:52.612893  604010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem (1082 bytes)
	I1213 11:51:52.612991  604010 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem, removing ...
	I1213 11:51:52.612997  604010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem
	I1213 11:51:52.613022  604010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem (1123 bytes)
	I1213 11:51:52.613072  604010 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem, removing ...
	I1213 11:51:52.613077  604010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem
	I1213 11:51:52.613099  604010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem (1675 bytes)
	I1213 11:51:52.613145  604010 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem org=jenkins.newest-cni-796924 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-796924]
	I1213 11:51:52.732846  604010 provision.go:177] copyRemoteCerts
	I1213 11:51:52.732930  604010 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:51:52.732973  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:52.750653  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:52.855439  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:51:52.874016  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 11:51:52.892129  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 11:51:52.911103  604010 provision.go:87] duration metric: took 315.684656ms to configureAuth
	I1213 11:51:52.911132  604010 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:51:52.911332  604010 config.go:182] Loaded profile config "newest-cni-796924": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:51:52.911340  604010 machine.go:97] duration metric: took 3.875289031s to provisionDockerMachine
	I1213 11:51:52.911347  604010 start.go:293] postStartSetup for "newest-cni-796924" (driver="docker")
	I1213 11:51:52.911359  604010 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:51:52.911407  604010 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:51:52.911460  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:52.929094  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:53.034971  604010 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:51:53.038558  604010 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:51:53.038590  604010 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:51:53.038602  604010 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/addons for local assets ...
	I1213 11:51:53.038659  604010 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/files for local assets ...
	I1213 11:51:53.038763  604010 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> 3089152.pem in /etc/ssl/certs
	I1213 11:51:53.038874  604010 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:51:53.046532  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 11:51:53.064751  604010 start.go:296] duration metric: took 153.388066ms for postStartSetup
	I1213 11:51:53.064850  604010 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:51:53.064897  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:53.083055  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:53.186537  604010 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:51:53.194814  604010 fix.go:56] duration metric: took 4.483190974s for fixHost
	I1213 11:51:53.194902  604010 start.go:83] releasing machines lock for "newest-cni-796924", held for 4.483304896s
	I1213 11:51:53.195014  604010 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-796924
	I1213 11:51:53.218858  604010 ssh_runner.go:195] Run: cat /version.json
	I1213 11:51:53.218911  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:53.219425  604010 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:51:53.219496  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:53.245887  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:53.248082  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:53.440734  604010 ssh_runner.go:195] Run: systemctl --version
	I1213 11:51:53.447618  604010 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:51:53.452306  604010 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:51:53.452441  604010 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:51:53.460789  604010 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 11:51:53.460813  604010 start.go:496] detecting cgroup driver to use...
	I1213 11:51:53.460876  604010 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 11:51:53.460961  604010 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 11:51:53.478830  604010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:51:53.493048  604010 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:51:53.493110  604010 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:51:53.509243  604010 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:51:53.522928  604010 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:51:53.639237  604010 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:51:53.752852  604010 docker.go:234] disabling docker service ...
	I1213 11:51:53.752960  604010 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:51:53.768708  604010 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:51:53.782124  604010 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:51:53.903168  604010 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:51:54.054509  604010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:51:54.067985  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:51:54.083550  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 11:51:54.093447  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 11:51:54.102944  604010 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 11:51:54.103048  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 11:51:54.112424  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:51:54.121802  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 11:51:54.130945  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:51:54.140080  604010 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:51:54.148567  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 11:51:54.157935  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 11:51:54.167456  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 11:51:54.176969  604010 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:51:54.184730  604010 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:51:54.192410  604010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:51:54.297614  604010 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 11:51:54.415943  604010 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 11:51:54.416062  604010 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 11:51:54.419918  604010 start.go:564] Will wait 60s for crictl version
	I1213 11:51:54.420004  604010 ssh_runner.go:195] Run: which crictl
	I1213 11:51:54.424003  604010 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:51:54.449039  604010 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 11:51:54.449144  604010 ssh_runner.go:195] Run: containerd --version
	I1213 11:51:54.473383  604010 ssh_runner.go:195] Run: containerd --version
	I1213 11:51:54.499419  604010 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 11:51:54.502369  604010 cli_runner.go:164] Run: docker network inspect newest-cni-796924 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:51:54.518648  604010 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 11:51:54.522791  604010 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:51:54.535931  604010 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 11:51:54.538956  604010 kubeadm.go:884] updating cluster {Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:51:54.539121  604010 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 11:51:54.539232  604010 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:51:54.563801  604010 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 11:51:54.563827  604010 containerd.go:534] Images already preloaded, skipping extraction
	I1213 11:51:54.563893  604010 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:51:54.592245  604010 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 11:51:54.592267  604010 cache_images.go:86] Images are preloaded, skipping loading
	I1213 11:51:54.592274  604010 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 11:51:54.592392  604010 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-796924 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:51:54.592461  604010 ssh_runner.go:195] Run: sudo crictl info
	I1213 11:51:54.621799  604010 cni.go:84] Creating CNI manager for ""
	I1213 11:51:54.621822  604010 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 11:51:54.621841  604010 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 11:51:54.621863  604010 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-796924 NodeName:newest-cni-796924 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:51:54.621977  604010 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-796924"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 11:51:54.622049  604010 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 11:51:54.629798  604010 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 11:51:54.629892  604010 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:51:54.637447  604010 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 11:51:54.650384  604010 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 11:51:54.666817  604010 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
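
The scp line above writes the three generated documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, plus the KubeProxyConfiguration) to /var/tmp/minikube/kubeadm.yaml.new as one "---"-separated stream. A minimal Go sketch of how such a multi-document file could be split and sanity-checked, assuming gopkg.in/yaml.v3; this is illustrative only, not minikube's own template-rendering code:

	package main

	import (
		"fmt"
		"os"
		"strings"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// kubeadm.yaml is a stream of "---"-separated YAML documents.
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, doc := range strings.Split(string(data), "\n---\n") {
			var m map[string]interface{}
			if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
				fmt.Fprintln(os.Stderr, "invalid document:", err)
				os.Exit(1)
			}
			// e.g. confirm the kubelet document carries the expected cgroup driver
			if m["kind"] == "KubeletConfiguration" {
				fmt.Println("kubelet cgroupDriver:", m["cgroupDriver"])
			}
		}
	}
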
	I1213 11:51:54.689998  604010 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:51:54.695776  604010 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
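
The bash one-liner above is an idempotent hosts-file update: filter out any existing control-plane.minikube.internal line, append the fresh mapping, stage the result in /tmp, and copy it back under sudo. A rough Go equivalent, under the assumption that the process may write /etc/hosts directly (the shell version stages through /tmp because only the final cp runs as root):

	package main

	import (
		"os"
		"strings"
	)

	const entry = "192.168.76.2\tcontrol-plane.minikube.internal"

	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		// keep every line except a previous control-plane mapping
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, entry)
		// assumption: running as root, so /etc/hosts is writable in place
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
	}
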
	I1213 11:51:54.710482  604010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:51:54.832824  604010 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:51:54.850492  604010 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924 for IP: 192.168.76.2
	I1213 11:51:54.850566  604010 certs.go:195] generating shared ca certs ...
	I1213 11:51:54.850597  604010 certs.go:227] acquiring lock for ca certs: {Name:mkca84a85b1b664dba08ef567e84d493256f825e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:51:54.850790  604010 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key
	I1213 11:51:54.850872  604010 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key
	I1213 11:51:54.850895  604010 certs.go:257] generating profile certs ...
	I1213 11:51:54.851026  604010 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/client.key
	I1213 11:51:54.851129  604010 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key.ced45374
	I1213 11:51:54.851211  604010 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.key
	I1213 11:51:54.851379  604010 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem (1338 bytes)
	W1213 11:51:54.851441  604010 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915_empty.pem, impossibly tiny 0 bytes
	I1213 11:51:54.851467  604010 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:51:54.851513  604010 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:51:54.851568  604010 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:51:54.851620  604010 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem (1675 bytes)
	I1213 11:51:54.851698  604010 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 11:51:54.852295  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:51:54.879994  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:51:54.900131  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:51:54.919515  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:51:54.939840  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 11:51:54.959348  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 11:51:54.977529  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:51:54.995648  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 11:51:55.023031  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /usr/share/ca-certificates/3089152.pem (1708 bytes)
	I1213 11:51:55.043814  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:51:55.063273  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem --> /usr/share/ca-certificates/308915.pem (1338 bytes)
	I1213 11:51:55.083198  604010 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:51:55.097732  604010 ssh_runner.go:195] Run: openssl version
	I1213 11:51:55.104458  604010 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3089152.pem
	I1213 11:51:55.112443  604010 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3089152.pem /etc/ssl/certs/3089152.pem
	I1213 11:51:55.120212  604010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3089152.pem
	I1213 11:51:55.124175  604010 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:22 /usr/share/ca-certificates/3089152.pem
	I1213 11:51:55.124296  604010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3089152.pem
	I1213 11:51:55.166612  604010 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:51:55.174931  604010 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:51:55.182763  604010 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:51:55.190655  604010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:51:55.194550  604010 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:51:55.194637  604010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:51:55.235820  604010 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:51:55.243647  604010 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/308915.pem
	I1213 11:51:55.251252  604010 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/308915.pem /etc/ssl/certs/308915.pem
	I1213 11:51:55.258979  604010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/308915.pem
	I1213 11:51:55.263040  604010 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:22 /usr/share/ca-certificates/308915.pem
	I1213 11:51:55.263115  604010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308915.pem
	I1213 11:51:55.305815  604010 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
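
The block above repeats one pattern per CA file: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 at it so the system trust store picks it up. A small sketch of that step in Go, shelling out to openssl just as the test does over SSH; the paths come from the log, everything else is an assumption:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// ensureCALink mirrors the "openssl x509 -hash -noout" + "ln -fs" pair above.
	func ensureCALink(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pem, err)
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // "ln -fs" semantics: replace any stale link
		return os.Symlink(pem, link)
	}

	func main() {
		if err := ensureCALink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
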
	I1213 11:51:55.313358  604010 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:51:55.317228  604010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 11:51:55.358360  604010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 11:51:55.399354  604010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 11:51:55.440616  604010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 11:51:55.481788  604010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 11:51:55.527783  604010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 11:51:55.570548  604010 kubeadm.go:401] StartCluster: {Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:51:55.570648  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 11:51:55.570740  604010 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:51:55.597807  604010 cri.go:89] found id: ""
	I1213 11:51:55.597910  604010 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:51:55.605830  604010 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 11:51:55.605851  604010 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 11:51:55.605907  604010 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 11:51:55.613526  604010 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 11:51:55.614085  604010 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-796924" does not appear in /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 11:51:55.614332  604010 kubeconfig.go:62] /home/jenkins/minikube-integration/22127-307042/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-796924" cluster setting kubeconfig missing "newest-cni-796924" context setting]
	I1213 11:51:55.614935  604010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/kubeconfig: {Name:mk9039d1d506122ff52224469c478e739c5ebabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:51:55.617326  604010 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 11:51:55.625376  604010 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1213 11:51:55.625455  604010 kubeadm.go:602] duration metric: took 19.59756ms to restartPrimaryControlPlane
	I1213 11:51:55.625473  604010 kubeadm.go:403] duration metric: took 54.935084ms to StartCluster
	I1213 11:51:55.625491  604010 settings.go:142] acquiring lock: {Name:mk079e9a25ebbc2c8fbae42d4c6ed096a652c00b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:51:55.625565  604010 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 11:51:55.626520  604010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/kubeconfig: {Name:mk9039d1d506122ff52224469c478e739c5ebabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:51:55.626793  604010 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 11:51:55.627185  604010 config.go:182] Loaded profile config "newest-cni-796924": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:51:55.627271  604010 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 11:51:55.627363  604010 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-796924"
	I1213 11:51:55.627383  604010 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-796924"
	I1213 11:51:55.627413  604010 host.go:66] Checking if "newest-cni-796924" exists ...
	I1213 11:51:55.627434  604010 addons.go:70] Setting dashboard=true in profile "newest-cni-796924"
	I1213 11:51:55.627450  604010 addons.go:239] Setting addon dashboard=true in "newest-cni-796924"
	W1213 11:51:55.627456  604010 addons.go:248] addon dashboard should already be in state true
	I1213 11:51:55.627477  604010 host.go:66] Checking if "newest-cni-796924" exists ...
	I1213 11:51:55.627878  604010 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:51:55.628091  604010 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:51:55.628783  604010 addons.go:70] Setting default-storageclass=true in profile "newest-cni-796924"
	I1213 11:51:55.628812  604010 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-796924"
	I1213 11:51:55.629112  604010 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:51:55.631079  604010 out.go:179] * Verifying Kubernetes components...
	I1213 11:51:55.634139  604010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:51:55.667375  604010 addons.go:239] Setting addon default-storageclass=true in "newest-cni-796924"
	I1213 11:51:55.667423  604010 host.go:66] Checking if "newest-cni-796924" exists ...
	I1213 11:51:55.667842  604010 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:51:55.688084  604010 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:51:55.691677  604010 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:51:55.691701  604010 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 11:51:55.691785  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:55.697906  604010 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 11:51:55.697933  604010 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 11:51:55.698005  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:55.704903  604010 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 11:51:55.707765  604010 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1213 11:51:53.170873  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:55.171466  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:57.171707  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:51:55.710658  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 11:51:55.710701  604010 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 11:51:55.710771  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:55.754330  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:55.772597  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:55.773144  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:55.866635  604010 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:51:55.926205  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 11:51:55.934055  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:51:55.957399  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 11:51:55.957444  604010 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 11:51:55.971225  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 11:51:55.971291  604010 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 11:51:56.007402  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 11:51:56.007444  604010 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 11:51:56.023097  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 11:51:56.023122  604010 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 11:51:56.039306  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 11:51:56.039347  604010 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 11:51:56.054865  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 11:51:56.054892  604010 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 11:51:56.069056  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 11:51:56.069097  604010 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 11:51:56.083856  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 11:51:56.083885  604010 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 11:51:56.097577  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 11:51:56.097600  604010 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 11:51:56.111351  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 11:51:56.663977  604010 api_server.go:52] waiting for apiserver process to appear ...
	W1213 11:51:56.664058  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.664121  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:51:56.664172  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.664188  604010 retry.go:31] will retry after 289.236479ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.664122  604010 retry.go:31] will retry after 183.877549ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:51:56.664453  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.664469  604010 retry.go:31] will retry after 218.899341ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.849187  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 11:51:56.883801  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:51:56.926668  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.926802  604010 retry.go:31] will retry after 241.089101ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.953849  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:51:56.985603  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.985688  604010 retry.go:31] will retry after 237.809149ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:51:57.026263  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:57.026297  604010 retry.go:31] will retry after 349.427803ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:57.164593  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:51:57.169067  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 11:51:57.224678  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:51:57.234523  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:57.234624  604010 retry.go:31] will retry after 787.051236ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:51:57.297371  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:57.297440  604010 retry.go:31] will retry after 317.469921ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:57.376456  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:51:57.452615  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:57.452649  604010 retry.go:31] will retry after 679.978714ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:57.616149  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 11:51:57.664727  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:51:57.701776  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:57.701820  604010 retry.go:31] will retry after 682.458958ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.022897  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:51:58.088105  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.088141  604010 retry.go:31] will retry after 475.463602ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.133516  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:51:58.165032  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:51:58.230626  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.230659  604010 retry.go:31] will retry after 634.421741ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.385149  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:51:58.461368  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.461471  604010 retry.go:31] will retry after 859.118132ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:51:59.671078  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:02.171305  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:51:58.564227  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:51:58.633858  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.633891  604010 retry.go:31] will retry after 1.632863719s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.665061  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:51:58.866071  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:51:58.936827  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.936859  604010 retry.go:31] will retry after 1.533813591s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:59.165263  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:51:59.321822  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:51:59.385607  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:59.385640  604010 retry.go:31] will retry after 2.101781304s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:59.665231  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:00.164312  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:00.267962  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 11:52:00.471799  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:52:00.516223  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:00.516306  604010 retry.go:31] will retry after 1.542990826s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:52:00.569718  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:00.569762  604010 retry.go:31] will retry after 1.699392085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:00.664868  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:01.165071  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:01.487701  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:52:01.556576  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:01.556610  604010 retry.go:31] will retry after 1.79578881s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:01.665032  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:02.059588  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:52:02.123368  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:02.123421  604010 retry.go:31] will retry after 4.212258745s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:02.164643  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:02.270065  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:52:02.336655  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:02.336687  604010 retry.go:31] will retry after 2.291652574s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:02.665180  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:03.164491  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:03.353076  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:52:03.415819  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:03.415855  604010 retry.go:31] will retry after 3.520621119s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:52:04.171660  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:06.671628  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
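
The interleaved node_ready.go lines come from the parallel no-preload test (process 596998), which polls GET /api/v1/nodes/no-preload-333352 and inspects the node's Ready condition, retrying while its apiserver at 192.168.85.2:8443 refuses connections. A sketch of that check using client-go, assuming the module is available; the kubeconfig path is a placeholder and the 2.5s cadence is read off the log timestamps:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the named node has condition Ready=True.
    func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err // e.g. "connect: connection refused" while the apiserver is down
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	// Placeholder kubeconfig path and node name.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		ok, err := nodeReady(cs, "no-preload-333352")
    		if err != nil {
    			fmt.Println("will retry:", err)
    			time.Sleep(2500 * time.Millisecond)
    			continue
    		}
    		fmt.Println("ready:", ok)
    		return
    	}
    }
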
	I1213 11:52:03.664666  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:04.164990  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:04.629361  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:52:04.665164  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:04.695856  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:04.695887  604010 retry.go:31] will retry after 5.092647079s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:05.164583  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:05.665005  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:06.164298  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
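
The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" runs above are the deployer checking, roughly every 500ms, whether a kube-apiserver process is alive before the next apply attempt. A sketch of an equivalent wait loop; it substitutes a TCP dial against the apiserver port for the pgrep check (the port and cadence are taken from the log, the dial-based approach is an illustrative assumption):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // waitForAPIServer polls the apiserver's TCP port until it accepts
    // connections or the deadline passes, mirroring the ~500ms cadence
    // of the pgrep checks in the log.
    func waitForAPIServer(addr string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", addr, time.Second)
    		if err == nil {
    			conn.Close()
    			return nil
    		}
    		fmt.Println("apiserver not up yet:", err)
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver did not come up within %v", timeout)
    }

    func main() {
    	if err := waitForAPIServer("localhost:8443", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
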
	I1213 11:52:06.336728  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:52:06.399256  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:06.399289  604010 retry.go:31] will retry after 2.548236052s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:06.664733  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:06.937128  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:52:07.007320  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:07.007359  604010 retry.go:31] will retry after 3.279734506s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:07.164482  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:07.664186  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:08.164259  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:09.170863  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:11.170983  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:08.664905  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:08.947682  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:52:09.039225  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:09.039255  604010 retry.go:31] will retry after 6.163469341s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:09.164651  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:09.664239  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:09.789499  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:52:09.850576  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:09.850610  604010 retry.go:31] will retry after 3.796434626s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:10.165090  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:10.288047  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:52:10.355227  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:10.355265  604010 retry.go:31] will retry after 7.010948619s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:10.664471  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:11.165062  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:11.664272  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:12.164932  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:12.664657  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:13.164305  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
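
The half-second cadence of the pgrep lines is minikube polling for a kube-apiserver process whose full command line mentions the profile, so it knows when the control plane is back and the addon applies are worth re-attempting. A local sketch of the same loop, assuming pgrep is on PATH (the SSH transport that ssh_runner.go implies is elided here):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Poll every 500ms, matching the cadence in the log, until a
	// matching kube-apiserver process exists.
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	deadline := time.After(2 * time.Minute)
	for {
		select {
		case <-tick.C:
			// -f matches against the full command line, -x requires the
			// whole line to match the pattern, -n picks the newest match.
			// pgrep exits 1 when nothing matches, so err == nil means found.
			if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				fmt.Println("kube-apiserver is running")
				return
			}
		case <-deadline:
			fmt.Println("timed out waiting for kube-apiserver")
			return
		}
	}
}
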
	W1213 11:52:13.670824  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:15.671074  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:13.647328  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:52:13.664818  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:13.719910  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:13.719942  604010 retry.go:31] will retry after 9.330768854s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:14.164344  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:14.664306  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:15.164242  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:15.203030  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:52:15.263577  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:15.263607  604010 retry.go:31] will retry after 8.190073233s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:15.664266  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:16.165207  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:16.664293  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:17.164467  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:17.367027  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:52:17.430899  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:17.430934  604010 retry.go:31] will retry after 13.887712507s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:17.664357  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:18.164881  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:18.170945  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:20.670832  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
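
The interleaved node_ready.go warnings belong to a second test process (pid 596998, exercising the no-preload-333352 profile) doing the API-level version of the same wait: fetch the Node object every couple of seconds and inspect its Ready condition, tolerating connection-refused while that profile's apiserver restarts. A hedged client-go sketch of that check; the kubeconfig path, poll interval, and loop shape are assumptions, and only the node name and error text come from the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady fetches the node and reports whether its Ready condition
// is True. While the apiserver is down, Get returns the same
// "dial tcp 192.168.85.2:8443: connect: connection refused" as above.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		ready, err := nodeReady(cs, "no-preload-333352")
		if err != nil {
			fmt.Println("error getting node (will retry):", err)
		} else if ready {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
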
	I1213 11:52:18.664960  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:19.164308  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:19.665208  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:20.165105  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:20.664287  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:21.164362  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:21.664274  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:22.164288  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:22.665206  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:23.051577  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:52:23.111902  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:23.111935  604010 retry.go:31] will retry after 11.527342508s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:23.165176  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:23.453917  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:52:23.170872  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:25.171346  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:27.171433  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:23.521291  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:23.521324  604010 retry.go:31] will retry after 14.842315117s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:23.664722  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:24.165113  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:24.664242  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:25.164277  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:25.664353  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:26.164245  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:26.664280  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:27.164344  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:27.664260  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:28.164294  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
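
ssh_runner.go:195 indicates each of these commands runs over SSH inside the node container rather than on the host. A rough sketch of that transport with golang.org/x/crypto/ssh; the key path, user, and address are assumptions for illustration (minikube's docker driver publishes the node's SSH port on a host-local port):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Hypothetical key path; minikube keeps a per-machine key under ~/.minikube.
	key, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/minikube/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker", // assumed default user inside the node image
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local node
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32772", cfg) // hypothetical published SSH port
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	// The exact probe from the log; a nonzero exit simply means "no such process yet".
	out, err := session.CombinedOutput("sudo pgrep -xnf kube-apiserver.*minikube.*")
	fmt.Printf("output=%q err=%v\n", out, err)
}
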
	W1213 11:52:29.670795  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:31.671822  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:28.664213  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:29.165160  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:29.664269  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:30.165128  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:30.664169  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:31.164314  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:31.319227  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:52:31.384220  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:31.384257  604010 retry.go:31] will retry after 14.168397615s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:31.664303  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:32.164990  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:32.664299  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:33.164301  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:34.171181  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:36.670803  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:33.664641  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:34.164270  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:34.639887  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:52:34.664451  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:34.713642  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:34.713678  604010 retry.go:31] will retry after 21.545330114s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:35.164160  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:35.665036  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:36.164253  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:36.664233  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:37.164426  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:37.664423  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:38.164585  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:38.364338  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:52:38.426452  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:38.426486  604010 retry.go:31] will retry after 16.958085374s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:52:38.670951  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:41.170820  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:38.665187  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:39.164590  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:39.665128  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:40.164295  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:40.664289  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:41.164238  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:41.664308  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:42.164562  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:42.664974  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:43.164327  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:43.170883  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:45.172031  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:47.670782  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:43.664236  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:44.164970  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:44.664271  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:45.164423  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:45.553023  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:52:45.614931  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:45.614965  604010 retry.go:31] will retry after 19.954026213s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:45.665141  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:46.164288  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:46.664717  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:47.164232  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:47.664844  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:48.164283  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:50.171769  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:52.671828  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:48.665063  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:49.164283  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:49.664430  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:50.165168  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:50.665085  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:51.164301  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:51.664309  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:52.165148  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:52.664704  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:53.164339  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:55.170984  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:56.171498  596998 node_ready.go:38] duration metric: took 6m0.001140759s for node "no-preload-333352" to be "Ready" ...
	I1213 11:52:56.174587  596998 out.go:203] 
	W1213 11:52:56.177556  596998 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 11:52:56.177585  596998 out.go:285] * 
	W1213 11:52:56.179740  596998 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 11:52:56.182759  596998 out.go:203] 
	I1213 11:52:53.664699  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:54.164840  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:54.664218  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:55.165093  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:55.385630  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:52:55.504689  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:55.504722  604010 retry.go:31] will retry after 37.277266145s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:55.664229  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:52:55.664327  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:52:55.694796  604010 cri.go:89] found id: ""
	I1213 11:52:55.694825  604010 logs.go:282] 0 containers: []
	W1213 11:52:55.694835  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:52:55.694843  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:52:55.694903  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:52:55.723663  604010 cri.go:89] found id: ""
	I1213 11:52:55.723688  604010 logs.go:282] 0 containers: []
	W1213 11:52:55.723697  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:52:55.723704  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:52:55.723763  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:52:55.748991  604010 cri.go:89] found id: ""
	I1213 11:52:55.749019  604010 logs.go:282] 0 containers: []
	W1213 11:52:55.749027  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:52:55.749034  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:52:55.749096  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:52:55.774258  604010 cri.go:89] found id: ""
	I1213 11:52:55.774281  604010 logs.go:282] 0 containers: []
	W1213 11:52:55.774290  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:52:55.774297  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:52:55.774355  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:52:55.798762  604010 cri.go:89] found id: ""
	I1213 11:52:55.798788  604010 logs.go:282] 0 containers: []
	W1213 11:52:55.798796  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:52:55.798802  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:52:55.798861  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:52:55.823037  604010 cri.go:89] found id: ""
	I1213 11:52:55.823063  604010 logs.go:282] 0 containers: []
	W1213 11:52:55.823071  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:52:55.823078  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:52:55.823139  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:52:55.847241  604010 cri.go:89] found id: ""
	I1213 11:52:55.847267  604010 logs.go:282] 0 containers: []
	W1213 11:52:55.847276  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:52:55.847283  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:52:55.847343  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:52:55.872394  604010 cri.go:89] found id: ""
	I1213 11:52:55.872464  604010 logs.go:282] 0 containers: []
	W1213 11:52:55.872488  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:52:55.872505  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:52:55.872518  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:52:55.888592  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:52:55.888623  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:52:55.954582  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:52:55.945990    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:55.946863    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:55.948347    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:55.948763    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:55.950227    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:52:55.945990    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:55.946863    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:55.948347    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:55.948763    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:55.950227    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:52:55.954616  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:52:55.954629  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:52:55.979360  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:52:55.979393  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:52:56.015953  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:52:56.015986  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:52:56.262345  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:52:56.407172  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:56.407203  604010 retry.go:31] will retry after 30.096993011s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:58.574217  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:58.585863  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:52:58.585937  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:52:58.613052  604010 cri.go:89] found id: ""
	I1213 11:52:58.613084  604010 logs.go:282] 0 containers: []
	W1213 11:52:58.613094  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:52:58.613102  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:52:58.613187  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:52:58.639217  604010 cri.go:89] found id: ""
	I1213 11:52:58.639241  604010 logs.go:282] 0 containers: []
	W1213 11:52:58.639250  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:52:58.639256  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:52:58.639323  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:52:58.691503  604010 cri.go:89] found id: ""
	I1213 11:52:58.691529  604010 logs.go:282] 0 containers: []
	W1213 11:52:58.691539  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:52:58.691545  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:52:58.691607  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:52:58.739302  604010 cri.go:89] found id: ""
	I1213 11:52:58.739330  604010 logs.go:282] 0 containers: []
	W1213 11:52:58.739339  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:52:58.739345  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:52:58.739407  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:52:58.768957  604010 cri.go:89] found id: ""
	I1213 11:52:58.768985  604010 logs.go:282] 0 containers: []
	W1213 11:52:58.768994  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:52:58.769001  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:52:58.769114  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:52:58.794144  604010 cri.go:89] found id: ""
	I1213 11:52:58.794172  604010 logs.go:282] 0 containers: []
	W1213 11:52:58.794181  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:52:58.794188  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:52:58.794248  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:52:58.818208  604010 cri.go:89] found id: ""
	I1213 11:52:58.818234  604010 logs.go:282] 0 containers: []
	W1213 11:52:58.818243  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:52:58.818250  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:52:58.818307  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:52:58.841575  604010 cri.go:89] found id: ""
	I1213 11:52:58.841600  604010 logs.go:282] 0 containers: []
	W1213 11:52:58.841613  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:52:58.841622  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:52:58.841636  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:52:58.867434  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:52:58.867469  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:52:58.898944  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:52:58.898974  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:52:58.954613  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:52:58.954649  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:52:58.970766  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:52:58.970842  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:52:59.034290  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:52:59.026403    1983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:59.026973    1983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:59.028473    1983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:59.028883    1983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:59.030363    1983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:52:59.026403    1983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:59.026973    1983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:59.028473    1983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:59.028883    1983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:59.030363    1983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:53:01.534586  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:01.545484  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:01.545555  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:01.572215  604010 cri.go:89] found id: ""
	I1213 11:53:01.572288  604010 logs.go:282] 0 containers: []
	W1213 11:53:01.572302  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:01.572310  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:01.572388  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:01.598159  604010 cri.go:89] found id: ""
	I1213 11:53:01.598188  604010 logs.go:282] 0 containers: []
	W1213 11:53:01.598196  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:01.598203  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:01.598300  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:01.623153  604010 cri.go:89] found id: ""
	I1213 11:53:01.623177  604010 logs.go:282] 0 containers: []
	W1213 11:53:01.623186  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:01.623195  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:01.623261  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:01.649622  604010 cri.go:89] found id: ""
	I1213 11:53:01.649644  604010 logs.go:282] 0 containers: []
	W1213 11:53:01.649652  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:01.649659  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:01.649737  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:01.683094  604010 cri.go:89] found id: ""
	I1213 11:53:01.683119  604010 logs.go:282] 0 containers: []
	W1213 11:53:01.683127  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:01.683133  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:01.683194  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:01.713141  604010 cri.go:89] found id: ""
	I1213 11:53:01.713209  604010 logs.go:282] 0 containers: []
	W1213 11:53:01.713236  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:01.713255  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:01.713329  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:01.743530  604010 cri.go:89] found id: ""
	I1213 11:53:01.743598  604010 logs.go:282] 0 containers: []
	W1213 11:53:01.743644  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:01.743659  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:01.743724  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:01.768540  604010 cri.go:89] found id: ""
	I1213 11:53:01.768567  604010 logs.go:282] 0 containers: []
	W1213 11:53:01.768575  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:01.768585  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:01.768596  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:01.793626  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:01.793664  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:01.820553  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:01.820583  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:01.876734  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:01.876770  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:01.893351  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:01.893425  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:01.982105  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:01.970876    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:01.971602    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:01.973230    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:01.973588    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:01.977591    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:53:01.970876    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:01.971602    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:01.973230    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:01.973588    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:01.977591    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:53:04.482731  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:04.495226  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:04.495299  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:04.521792  604010 cri.go:89] found id: ""
	I1213 11:53:04.521819  604010 logs.go:282] 0 containers: []
	W1213 11:53:04.521829  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:04.521836  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:04.521900  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:04.553223  604010 cri.go:89] found id: ""
	I1213 11:53:04.553249  604010 logs.go:282] 0 containers: []
	W1213 11:53:04.553258  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:04.553264  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:04.553333  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:04.580024  604010 cri.go:89] found id: ""
	I1213 11:53:04.580049  604010 logs.go:282] 0 containers: []
	W1213 11:53:04.580058  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:04.580064  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:04.580123  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:04.622013  604010 cri.go:89] found id: ""
	I1213 11:53:04.622041  604010 logs.go:282] 0 containers: []
	W1213 11:53:04.622050  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:04.622057  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:04.622117  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:04.646212  604010 cri.go:89] found id: ""
	I1213 11:53:04.646236  604010 logs.go:282] 0 containers: []
	W1213 11:53:04.646245  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:04.646251  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:04.646312  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:04.682129  604010 cri.go:89] found id: ""
	I1213 11:53:04.682156  604010 logs.go:282] 0 containers: []
	W1213 11:53:04.682165  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:04.682171  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:04.682288  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:04.710645  604010 cri.go:89] found id: ""
	I1213 11:53:04.710675  604010 logs.go:282] 0 containers: []
	W1213 11:53:04.710706  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:04.710714  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:04.710781  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:04.742882  604010 cri.go:89] found id: ""
	I1213 11:53:04.742906  604010 logs.go:282] 0 containers: []
	W1213 11:53:04.742915  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:04.742926  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:04.742938  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:04.799010  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:04.799046  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:04.814626  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:04.814655  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:04.884663  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:04.876082    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:04.876754    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:04.878443    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:04.878819    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:04.880048    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:53:04.876082    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:04.876754    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:04.878443    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:04.878819    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:04.880048    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:53:04.884686  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:04.884717  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:04.910422  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:04.910589  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:05.570211  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:53:05.631760  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:53:05.631794  604010 retry.go:31] will retry after 44.542402529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:53:07.442499  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:07.453537  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:07.453615  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:07.482132  604010 cri.go:89] found id: ""
	I1213 11:53:07.482155  604010 logs.go:282] 0 containers: []
	W1213 11:53:07.482163  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:07.482170  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:07.482229  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:07.506787  604010 cri.go:89] found id: ""
	I1213 11:53:07.506813  604010 logs.go:282] 0 containers: []
	W1213 11:53:07.506823  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:07.506829  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:07.506890  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:07.532425  604010 cri.go:89] found id: ""
	I1213 11:53:07.532449  604010 logs.go:282] 0 containers: []
	W1213 11:53:07.532458  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:07.532465  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:07.532527  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:07.557042  604010 cri.go:89] found id: ""
	I1213 11:53:07.557071  604010 logs.go:282] 0 containers: []
	W1213 11:53:07.557081  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:07.557087  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:07.557147  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:07.581888  604010 cri.go:89] found id: ""
	I1213 11:53:07.581919  604010 logs.go:282] 0 containers: []
	W1213 11:53:07.581934  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:07.581940  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:07.582000  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:07.605619  604010 cri.go:89] found id: ""
	I1213 11:53:07.605646  604010 logs.go:282] 0 containers: []
	W1213 11:53:07.605655  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:07.605661  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:07.605722  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:07.631481  604010 cri.go:89] found id: ""
	I1213 11:53:07.631503  604010 logs.go:282] 0 containers: []
	W1213 11:53:07.631511  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:07.631517  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:07.631574  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:07.656152  604010 cri.go:89] found id: ""
	I1213 11:53:07.656178  604010 logs.go:282] 0 containers: []
	W1213 11:53:07.656187  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:07.656196  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:07.656207  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:07.738199  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:07.729773    2303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:07.730173    2303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:07.732061    2303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:07.732672    2303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:07.734342    2303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
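Every "describe nodes" attempt above fails the same way: nothing is listening on the apiserver port, so kubectl's connection is refused at the TCP layer before any API request can be made. A minimal Go sketch of that reachability check (host and port taken from the log; this is an illustrative probe, not minikube's code):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Attempt the same TCP connection kubectl makes; on this node it
		// fails with "connect: connection refused" because no apiserver
		// container is running.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}

Until this dial succeeds, every kubectl-based log-gathering step below fails identically.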
	I1213 11:53:07.738218  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:07.738230  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:07.763561  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:07.763597  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:07.791032  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:07.791059  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:07.846125  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:07.846160  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
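From here the same probe cycle repeats roughly every three seconds: check for a kube-apiserver process, list CRI containers for each expected control-plane component, and regather the kubelet, dmesg, containerd, and container-status logs when nothing is found. A simplified, hypothetical reconstruction of that loop (run locally with os/exec rather than over SSH as minikube does; the wait budget and sleep interval are assumed):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
			"kubernetes-dashboard",
		}
		deadline := time.Now().Add(6 * time.Minute) // assumed wait budget
		for time.Now().Before(deadline) {
			// Same process check as the log: pgrep for a running apiserver.
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				fmt.Println("kube-apiserver process found")
				return
			}
			// Same container check as the log: crictl ps per component.
			for _, name := range components {
				out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
				if strings.TrimSpace(string(out)) == "" {
					fmt.Printf("No container was found matching %q\n", name)
				}
			}
			time.Sleep(2500 * time.Millisecond) // approximates the ~3 s cadence seen above
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}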
	I1213 11:53:10.362523  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:10.372985  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:10.373056  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:10.397984  604010 cri.go:89] found id: ""
	I1213 11:53:10.398016  604010 logs.go:282] 0 containers: []
	W1213 11:53:10.398037  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:10.398044  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:10.398121  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:10.423159  604010 cri.go:89] found id: ""
	I1213 11:53:10.423189  604010 logs.go:282] 0 containers: []
	W1213 11:53:10.423198  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:10.423204  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:10.423266  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:10.447027  604010 cri.go:89] found id: ""
	I1213 11:53:10.447055  604010 logs.go:282] 0 containers: []
	W1213 11:53:10.447064  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:10.447071  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:10.447131  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:10.472026  604010 cri.go:89] found id: ""
	I1213 11:53:10.472049  604010 logs.go:282] 0 containers: []
	W1213 11:53:10.472057  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:10.472064  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:10.472122  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:10.503263  604010 cri.go:89] found id: ""
	I1213 11:53:10.503326  604010 logs.go:282] 0 containers: []
	W1213 11:53:10.503352  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:10.503366  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:10.503440  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:10.532481  604010 cri.go:89] found id: ""
	I1213 11:53:10.532509  604010 logs.go:282] 0 containers: []
	W1213 11:53:10.532518  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:10.532524  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:10.532587  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:10.557219  604010 cri.go:89] found id: ""
	I1213 11:53:10.557258  604010 logs.go:282] 0 containers: []
	W1213 11:53:10.557266  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:10.557273  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:10.557342  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:10.585410  604010 cri.go:89] found id: ""
	I1213 11:53:10.585499  604010 logs.go:282] 0 containers: []
	W1213 11:53:10.585522  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:10.585547  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:10.585587  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:10.611450  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:10.611488  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:10.639926  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:10.639954  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:10.696844  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:10.696881  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:10.713623  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:10.713657  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:10.777642  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:10.768681    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:10.769607    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:10.771307    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:10.771820    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:10.773703    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:13.278890  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:13.289748  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:13.289817  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:13.317511  604010 cri.go:89] found id: ""
	I1213 11:53:13.317541  604010 logs.go:282] 0 containers: []
	W1213 11:53:13.317550  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:13.317557  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:13.317618  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:13.343404  604010 cri.go:89] found id: ""
	I1213 11:53:13.343432  604010 logs.go:282] 0 containers: []
	W1213 11:53:13.343441  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:13.343448  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:13.343503  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:13.369007  604010 cri.go:89] found id: ""
	I1213 11:53:13.369030  604010 logs.go:282] 0 containers: []
	W1213 11:53:13.369039  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:13.369046  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:13.369108  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:13.395054  604010 cri.go:89] found id: ""
	I1213 11:53:13.395084  604010 logs.go:282] 0 containers: []
	W1213 11:53:13.395094  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:13.395109  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:13.395171  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:13.424003  604010 cri.go:89] found id: ""
	I1213 11:53:13.424030  604010 logs.go:282] 0 containers: []
	W1213 11:53:13.424039  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:13.424046  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:13.424105  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:13.448932  604010 cri.go:89] found id: ""
	I1213 11:53:13.449012  604010 logs.go:282] 0 containers: []
	W1213 11:53:13.449029  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:13.449036  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:13.449112  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:13.474446  604010 cri.go:89] found id: ""
	I1213 11:53:13.474472  604010 logs.go:282] 0 containers: []
	W1213 11:53:13.474481  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:13.474487  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:13.474611  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:13.501117  604010 cri.go:89] found id: ""
	I1213 11:53:13.501141  604010 logs.go:282] 0 containers: []
	W1213 11:53:13.501150  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:13.501159  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:13.501171  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:13.557792  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:13.557829  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:13.574541  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:13.574574  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:13.639676  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:13.629891    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:13.631830    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:13.632611    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:13.634220    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:13.634886    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:13.639700  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:13.639713  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:13.664830  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:13.664911  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:16.204971  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:16.215560  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:16.215635  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:16.240196  604010 cri.go:89] found id: ""
	I1213 11:53:16.240220  604010 logs.go:282] 0 containers: []
	W1213 11:53:16.240229  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:16.240235  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:16.240293  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:16.265455  604010 cri.go:89] found id: ""
	I1213 11:53:16.265487  604010 logs.go:282] 0 containers: []
	W1213 11:53:16.265497  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:16.265504  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:16.265562  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:16.289852  604010 cri.go:89] found id: ""
	I1213 11:53:16.289875  604010 logs.go:282] 0 containers: []
	W1213 11:53:16.289886  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:16.289893  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:16.289954  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:16.315329  604010 cri.go:89] found id: ""
	I1213 11:53:16.315353  604010 logs.go:282] 0 containers: []
	W1213 11:53:16.315362  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:16.315368  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:16.315433  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:16.346811  604010 cri.go:89] found id: ""
	I1213 11:53:16.346835  604010 logs.go:282] 0 containers: []
	W1213 11:53:16.346844  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:16.346856  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:16.346916  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:16.371580  604010 cri.go:89] found id: ""
	I1213 11:53:16.371608  604010 logs.go:282] 0 containers: []
	W1213 11:53:16.371617  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:16.371623  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:16.371759  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:16.397183  604010 cri.go:89] found id: ""
	I1213 11:53:16.397210  604010 logs.go:282] 0 containers: []
	W1213 11:53:16.397219  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:16.397225  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:16.397286  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:16.422782  604010 cri.go:89] found id: ""
	I1213 11:53:16.422810  604010 logs.go:282] 0 containers: []
	W1213 11:53:16.422821  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:16.422831  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:16.422848  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:16.478667  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:16.478714  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:16.494974  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:16.495011  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:16.560810  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:16.552790    2649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:16.553168    2649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:16.554711    2649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:16.555221    2649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:16.556703    2649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:16.560835  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:16.560849  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:16.586263  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:16.586301  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:19.117851  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:19.128831  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:19.128899  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:19.156507  604010 cri.go:89] found id: ""
	I1213 11:53:19.156537  604010 logs.go:282] 0 containers: []
	W1213 11:53:19.156546  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:19.156553  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:19.156619  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:19.184004  604010 cri.go:89] found id: ""
	I1213 11:53:19.184032  604010 logs.go:282] 0 containers: []
	W1213 11:53:19.184041  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:19.184048  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:19.184108  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:19.210447  604010 cri.go:89] found id: ""
	I1213 11:53:19.210475  604010 logs.go:282] 0 containers: []
	W1213 11:53:19.210485  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:19.210491  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:19.210563  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:19.243214  604010 cri.go:89] found id: ""
	I1213 11:53:19.243241  604010 logs.go:282] 0 containers: []
	W1213 11:53:19.243250  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:19.243257  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:19.243317  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:19.267811  604010 cri.go:89] found id: ""
	I1213 11:53:19.267835  604010 logs.go:282] 0 containers: []
	W1213 11:53:19.267845  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:19.267851  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:19.267912  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:19.291841  604010 cri.go:89] found id: ""
	I1213 11:53:19.291863  604010 logs.go:282] 0 containers: []
	W1213 11:53:19.291872  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:19.291878  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:19.291942  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:19.316863  604010 cri.go:89] found id: ""
	I1213 11:53:19.316890  604010 logs.go:282] 0 containers: []
	W1213 11:53:19.316898  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:19.316904  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:19.316963  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:19.341844  604010 cri.go:89] found id: ""
	I1213 11:53:19.341872  604010 logs.go:282] 0 containers: []
	W1213 11:53:19.341881  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:19.341890  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:19.341901  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:19.397829  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:19.397868  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:19.413720  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:19.413749  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:19.481667  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:19.473280    2765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:19.474094    2765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:19.475625    2765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:19.476130    2765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:19.477751    2765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:19.481694  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:19.481706  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:19.507029  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:19.507069  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:22.036187  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:22.047443  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:22.047516  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:22.073399  604010 cri.go:89] found id: ""
	I1213 11:53:22.073425  604010 logs.go:282] 0 containers: []
	W1213 11:53:22.073433  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:22.073440  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:22.073519  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:22.102458  604010 cri.go:89] found id: ""
	I1213 11:53:22.102483  604010 logs.go:282] 0 containers: []
	W1213 11:53:22.102492  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:22.102499  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:22.102564  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:22.127170  604010 cri.go:89] found id: ""
	I1213 11:53:22.127195  604010 logs.go:282] 0 containers: []
	W1213 11:53:22.127203  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:22.127210  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:22.127270  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:22.152852  604010 cri.go:89] found id: ""
	I1213 11:53:22.152879  604010 logs.go:282] 0 containers: []
	W1213 11:53:22.152887  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:22.152894  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:22.152972  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:22.194915  604010 cri.go:89] found id: ""
	I1213 11:53:22.194939  604010 logs.go:282] 0 containers: []
	W1213 11:53:22.194947  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:22.194985  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:22.195074  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:22.228469  604010 cri.go:89] found id: ""
	I1213 11:53:22.228497  604010 logs.go:282] 0 containers: []
	W1213 11:53:22.228507  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:22.228514  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:22.228574  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:22.257833  604010 cri.go:89] found id: ""
	I1213 11:53:22.257908  604010 logs.go:282] 0 containers: []
	W1213 11:53:22.257931  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:22.257949  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:22.258038  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:22.283351  604010 cri.go:89] found id: ""
	I1213 11:53:22.283375  604010 logs.go:282] 0 containers: []
	W1213 11:53:22.283385  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:22.283394  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:22.283425  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:22.339722  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:22.339759  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:22.358616  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:22.358649  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:22.425578  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:22.417365    2881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:22.418082    2881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:22.419768    2881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:22.420247    2881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:22.421786    2881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:22.425645  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:22.425665  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:22.450867  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:22.450905  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:24.977642  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:24.988556  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:24.988625  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:25.016189  604010 cri.go:89] found id: ""
	I1213 11:53:25.016224  604010 logs.go:282] 0 containers: []
	W1213 11:53:25.016247  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:25.016255  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:25.016320  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:25.044535  604010 cri.go:89] found id: ""
	I1213 11:53:25.044558  604010 logs.go:282] 0 containers: []
	W1213 11:53:25.044567  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:25.044573  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:25.044632  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:25.070715  604010 cri.go:89] found id: ""
	I1213 11:53:25.070743  604010 logs.go:282] 0 containers: []
	W1213 11:53:25.070752  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:25.070759  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:25.070822  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:25.096936  604010 cri.go:89] found id: ""
	I1213 11:53:25.096959  604010 logs.go:282] 0 containers: []
	W1213 11:53:25.096967  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:25.096974  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:25.097035  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:25.122437  604010 cri.go:89] found id: ""
	I1213 11:53:25.122470  604010 logs.go:282] 0 containers: []
	W1213 11:53:25.122480  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:25.122486  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:25.122584  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:25.148962  604010 cri.go:89] found id: ""
	I1213 11:53:25.148988  604010 logs.go:282] 0 containers: []
	W1213 11:53:25.148997  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:25.149003  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:25.149074  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:25.181633  604010 cri.go:89] found id: ""
	I1213 11:53:25.181655  604010 logs.go:282] 0 containers: []
	W1213 11:53:25.181664  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:25.181670  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:25.181732  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:25.212760  604010 cri.go:89] found id: ""
	I1213 11:53:25.212782  604010 logs.go:282] 0 containers: []
	W1213 11:53:25.212790  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:25.212799  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:25.212811  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:25.276581  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:25.268697    2988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:25.269118    2988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:25.270651    2988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:25.271026    2988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:25.272496    2988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:25.276603  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:25.276616  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:25.302726  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:25.302763  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:25.334110  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:25.334183  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:25.390064  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:25.390100  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:26.504848  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:53:26.566930  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:53:26.567035  604010 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
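The storage-provisioner failure has the same root cause: kubectl apply cannot download the OpenAPI schema from the unreachable apiserver, so validation fails before the manifest is ever submitted. As the "will retry" line indicates, minikube re-attempts the apply; a hypothetical retry helper in that spirit (attempt count and backoff are assumed, not minikube's actual addons code):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// applyWithRetry re-runs `kubectl apply --force` until it succeeds
	// or the attempts are exhausted, backing off between tries.
	func applyWithRetry(manifest string, attempts int) error {
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("sudo",
				"KUBECONFIG=/var/lib/minikube/kubeconfig",
				"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
				"apply", "--force", "-f", manifest).CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("apply failed: %v: %s", err, out)
			time.Sleep(time.Duration(i+1) * 2 * time.Second) // linear backoff (assumed)
		}
		return lastErr
	}

	func main() {
		if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5); err != nil {
			fmt.Println("giving up:", err)
		}
	}

Note that with the apiserver down, --validate=false would only defer the failure: the apply itself would still be refused at connect time.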
	I1213 11:53:27.907342  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:27.919244  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:27.919322  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:27.953618  604010 cri.go:89] found id: ""
	I1213 11:53:27.953646  604010 logs.go:282] 0 containers: []
	W1213 11:53:27.953656  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:27.953662  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:27.953732  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:27.983451  604010 cri.go:89] found id: ""
	I1213 11:53:27.983474  604010 logs.go:282] 0 containers: []
	W1213 11:53:27.983483  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:27.983494  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:27.983553  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:28.015089  604010 cri.go:89] found id: ""
	I1213 11:53:28.015124  604010 logs.go:282] 0 containers: []
	W1213 11:53:28.015133  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:28.015141  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:28.015206  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:28.040741  604010 cri.go:89] found id: ""
	I1213 11:53:28.040764  604010 logs.go:282] 0 containers: []
	W1213 11:53:28.040773  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:28.040780  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:28.040847  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:28.066994  604010 cri.go:89] found id: ""
	I1213 11:53:28.067023  604010 logs.go:282] 0 containers: []
	W1213 11:53:28.067032  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:28.067039  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:28.067100  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:28.096788  604010 cri.go:89] found id: ""
	I1213 11:53:28.096819  604010 logs.go:282] 0 containers: []
	W1213 11:53:28.096828  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:28.096835  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:28.096896  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:28.124766  604010 cri.go:89] found id: ""
	I1213 11:53:28.124789  604010 logs.go:282] 0 containers: []
	W1213 11:53:28.124798  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:28.124804  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:28.124873  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:28.159549  604010 cri.go:89] found id: ""
	I1213 11:53:28.159577  604010 logs.go:282] 0 containers: []
	W1213 11:53:28.159585  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:28.159594  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:28.159606  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:28.199573  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:28.199603  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:28.270740  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:28.270789  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:28.287502  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:28.287532  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:28.351364  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:28.343352    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:28.343924    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:28.345385    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:28.345783    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:28.347266    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:28.351388  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:28.351401  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:30.876922  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:30.887774  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:30.887849  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:30.923850  604010 cri.go:89] found id: ""
	I1213 11:53:30.923878  604010 logs.go:282] 0 containers: []
	W1213 11:53:30.923887  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:30.923893  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:30.923952  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:30.951470  604010 cri.go:89] found id: ""
	I1213 11:53:30.951498  604010 logs.go:282] 0 containers: []
	W1213 11:53:30.951507  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:30.951513  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:30.951570  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:30.984618  604010 cri.go:89] found id: ""
	I1213 11:53:30.984644  604010 logs.go:282] 0 containers: []
	W1213 11:53:30.984653  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:30.984659  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:30.984718  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:31.013958  604010 cri.go:89] found id: ""
	I1213 11:53:31.013986  604010 logs.go:282] 0 containers: []
	W1213 11:53:31.013994  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:31.014001  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:31.014062  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:31.039624  604010 cri.go:89] found id: ""
	I1213 11:53:31.039651  604010 logs.go:282] 0 containers: []
	W1213 11:53:31.039661  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:31.039668  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:31.039735  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:31.065442  604010 cri.go:89] found id: ""
	I1213 11:53:31.065471  604010 logs.go:282] 0 containers: []
	W1213 11:53:31.065480  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:31.065526  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:31.065591  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:31.093987  604010 cri.go:89] found id: ""
	I1213 11:53:31.094012  604010 logs.go:282] 0 containers: []
	W1213 11:53:31.094022  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:31.094028  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:31.094092  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:31.120512  604010 cri.go:89] found id: ""
	I1213 11:53:31.120536  604010 logs.go:282] 0 containers: []
	W1213 11:53:31.120545  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:31.120555  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:31.120568  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:31.193061  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:31.184276    3220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:31.185271    3220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:31.187099    3220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:31.187409    3220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:31.188923    3220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:31.193086  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:31.193099  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:31.222013  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:31.222046  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:31.251352  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:31.251380  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:31.307515  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:31.307558  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
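	Each "Gathering logs for ..." line pairs a log source with the exact shell pipeline run on the node. A self-contained sketch of the same collection loop (assumed Go; run locally here rather than through minikube's ssh_runner, with the commands copied verbatim from the log; requires bash, journalctl, and sudo):

	    // gatherlogs.go: run the log-collection pipelines shown above and
	    // print whatever each one returns.
	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    func main() {
	    	sources := []struct{ name, cmd string }{
	    		{"kubelet", "sudo journalctl -u kubelet -n 400"},
	    		{"containerd", "sudo journalctl -u containerd -n 400"},
	    		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
	    		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	    	}
	    	for _, s := range sources {
	    		fmt.Println("Gathering logs for", s.name, "...")
	    		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
	    		if err != nil {
	    			fmt.Printf("%s failed: %v\n", s.name, err)
	    		}
	    		fmt.Print(string(out))
	    	}
	    }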
	I1213 11:53:32.782865  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:53:32.843769  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:53:32.843886  604010 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
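	The addon path fails for the same underlying reason: kubectl apply validates manifests against the live OpenAPI schema, so with the apiserver down the apply errors out and minikube schedules a retry. A hedged sketch of such a retry loop (plain Go, not minikube's addons code; the manifest path and kubeconfig location are taken from the log, and kubectl must be on PATH):

	    // applyaddon.go: retry `kubectl apply` with a simple linear backoff
	    // until the apiserver comes back or attempts run out.
	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"time"
	    )

	    func main() {
	    	manifest := "/etc/kubernetes/addons/storageclass.yaml"
	    	for attempt := 1; attempt <= 5; attempt++ {
	    		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest,
	    			"--kubeconfig", "/var/lib/minikube/kubeconfig").CombinedOutput()
	    		if err == nil {
	    			fmt.Println("applied:", manifest)
	    			return
	    		}
	    		fmt.Printf("apply failed (attempt %d), will retry: %v\n%s", attempt, err, out)
	    		time.Sleep(time.Duration(attempt) * time.Second)
	    	}
	    	fmt.Println("giving up: apiserver still unreachable")
	    }

	Note that passing --validate=false, as the error text suggests, would only skip the OpenAPI download; the apply itself would still fail against a dead apiserver.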
	I1213 11:53:33.825081  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:33.836405  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:33.836483  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:33.862074  604010 cri.go:89] found id: ""
	I1213 11:53:33.862097  604010 logs.go:282] 0 containers: []
	W1213 11:53:33.862108  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:33.862114  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:33.862174  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:33.887847  604010 cri.go:89] found id: ""
	I1213 11:53:33.887872  604010 logs.go:282] 0 containers: []
	W1213 11:53:33.887881  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:33.887888  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:33.887953  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:33.922816  604010 cri.go:89] found id: ""
	I1213 11:53:33.922839  604010 logs.go:282] 0 containers: []
	W1213 11:53:33.922847  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:33.922854  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:33.922912  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:33.956255  604010 cri.go:89] found id: ""
	I1213 11:53:33.956278  604010 logs.go:282] 0 containers: []
	W1213 11:53:33.956286  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:33.956296  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:33.956357  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:33.988633  604010 cri.go:89] found id: ""
	I1213 11:53:33.988660  604010 logs.go:282] 0 containers: []
	W1213 11:53:33.988668  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:33.988675  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:33.988734  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:34.016574  604010 cri.go:89] found id: ""
	I1213 11:53:34.016600  604010 logs.go:282] 0 containers: []
	W1213 11:53:34.016610  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:34.016618  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:34.016688  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:34.047246  604010 cri.go:89] found id: ""
	I1213 11:53:34.047274  604010 logs.go:282] 0 containers: []
	W1213 11:53:34.047283  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:34.047290  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:34.047351  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:34.073767  604010 cri.go:89] found id: ""
	I1213 11:53:34.073791  604010 logs.go:282] 0 containers: []
	W1213 11:53:34.073801  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
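	Every few-second cycle above walks the same list of control-plane components and asks the runtime for matching containers; an empty ID list is what produces each "No container was found" warning. A sketch of that check (assumed Go wrapped around the crictl invocation shown verbatim in the log; needs crictl and sudo on the host):

	    // listcri.go: list container IDs for each expected component via
	    // `crictl ps -a --quiet --name=<component>`; empty output means the
	    // component never started.
	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    )

	    func containerIDs(name string) ([]string, error) {
	    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	    	if err != nil {
	    		return nil, err
	    	}
	    	return strings.Fields(string(out)), nil // one ID per line when found
	    }

	    func main() {
	    	components := []string{"kube-apiserver", "etcd", "coredns",
	    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
	    		"kindnet", "kubernetes-dashboard"}
	    	for _, c := range components {
	    		ids, err := containerIDs(c)
	    		if err != nil {
	    			fmt.Printf("%s: crictl failed: %v\n", c, err)
	    			continue
	    		}
	    		if len(ids) == 0 {
	    			fmt.Printf("no container was found matching %q\n", c)
	    			continue
	    		}
	    		fmt.Printf("%s: %v\n", c, ids)
	    	}
	    }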
	I1213 11:53:34.073810  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:34.073821  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:34.142086  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:34.142126  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:34.160135  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:34.160221  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:34.242780  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:34.234520    3342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:34.235116    3342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:34.236649    3342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:34.237063    3342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:34.238589    3342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:34.242803  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:34.242817  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:34.268944  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:34.268981  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:36.800525  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:36.813555  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:36.813631  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:36.838503  604010 cri.go:89] found id: ""
	I1213 11:53:36.838530  604010 logs.go:282] 0 containers: []
	W1213 11:53:36.838539  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:36.838546  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:36.838610  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:36.863532  604010 cri.go:89] found id: ""
	I1213 11:53:36.863553  604010 logs.go:282] 0 containers: []
	W1213 11:53:36.863562  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:36.863569  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:36.863629  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:36.888886  604010 cri.go:89] found id: ""
	I1213 11:53:36.888912  604010 logs.go:282] 0 containers: []
	W1213 11:53:36.888920  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:36.888926  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:36.888992  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:36.917481  604010 cri.go:89] found id: ""
	I1213 11:53:36.917566  604010 logs.go:282] 0 containers: []
	W1213 11:53:36.917589  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:36.917608  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:36.917708  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:36.951605  604010 cri.go:89] found id: ""
	I1213 11:53:36.951676  604010 logs.go:282] 0 containers: []
	W1213 11:53:36.951698  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:36.951716  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:36.951808  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:36.980776  604010 cri.go:89] found id: ""
	I1213 11:53:36.980798  604010 logs.go:282] 0 containers: []
	W1213 11:53:36.980807  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:36.980814  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:36.980878  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:37.014102  604010 cri.go:89] found id: ""
	I1213 11:53:37.014129  604010 logs.go:282] 0 containers: []
	W1213 11:53:37.014139  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:37.014146  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:37.014218  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:37.041045  604010 cri.go:89] found id: ""
	I1213 11:53:37.041068  604010 logs.go:282] 0 containers: []
	W1213 11:53:37.041076  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:37.041086  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:37.041099  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:37.057607  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:37.057677  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:37.123513  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:37.114613    3451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:37.115389    3451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:37.117143    3451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:37.117811    3451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:37.119588    3451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:37.123585  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:37.123612  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:37.149745  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:37.149782  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:37.190123  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:37.190160  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:39.753400  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:39.766329  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:39.766428  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:39.794895  604010 cri.go:89] found id: ""
	I1213 11:53:39.794979  604010 logs.go:282] 0 containers: []
	W1213 11:53:39.794995  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:39.795003  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:39.795077  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:39.819418  604010 cri.go:89] found id: ""
	I1213 11:53:39.819444  604010 logs.go:282] 0 containers: []
	W1213 11:53:39.819453  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:39.819462  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:39.819522  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:39.847949  604010 cri.go:89] found id: ""
	I1213 11:53:39.847976  604010 logs.go:282] 0 containers: []
	W1213 11:53:39.847985  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:39.847992  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:39.848064  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:39.872978  604010 cri.go:89] found id: ""
	I1213 11:53:39.873009  604010 logs.go:282] 0 containers: []
	W1213 11:53:39.873018  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:39.873025  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:39.873091  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:39.900210  604010 cri.go:89] found id: ""
	I1213 11:53:39.900236  604010 logs.go:282] 0 containers: []
	W1213 11:53:39.900245  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:39.900252  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:39.900311  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:39.934251  604010 cri.go:89] found id: ""
	I1213 11:53:39.934276  604010 logs.go:282] 0 containers: []
	W1213 11:53:39.934285  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:39.934291  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:39.934351  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:39.964389  604010 cri.go:89] found id: ""
	I1213 11:53:39.964416  604010 logs.go:282] 0 containers: []
	W1213 11:53:39.964425  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:39.964431  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:39.964496  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:39.995412  604010 cri.go:89] found id: ""
	I1213 11:53:39.995435  604010 logs.go:282] 0 containers: []
	W1213 11:53:39.995444  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:39.995454  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:39.995466  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:40.074600  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:40.074644  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:40.093065  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:40.093143  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:40.162566  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:40.153392    3563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:40.154048    3563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:40.155849    3563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:40.156585    3563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:40.158356    3563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:40.162633  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:40.162659  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:40.191469  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:40.191548  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:42.738325  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:42.749369  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:42.749435  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:42.776660  604010 cri.go:89] found id: ""
	I1213 11:53:42.776686  604010 logs.go:282] 0 containers: []
	W1213 11:53:42.776695  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:42.776701  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:42.776761  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:42.802014  604010 cri.go:89] found id: ""
	I1213 11:53:42.802042  604010 logs.go:282] 0 containers: []
	W1213 11:53:42.802051  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:42.802057  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:42.802116  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:42.826554  604010 cri.go:89] found id: ""
	I1213 11:53:42.826583  604010 logs.go:282] 0 containers: []
	W1213 11:53:42.826592  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:42.826598  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:42.826659  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:42.853269  604010 cri.go:89] found id: ""
	I1213 11:53:42.853296  604010 logs.go:282] 0 containers: []
	W1213 11:53:42.853305  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:42.853319  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:42.853384  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:42.880122  604010 cri.go:89] found id: ""
	I1213 11:53:42.880150  604010 logs.go:282] 0 containers: []
	W1213 11:53:42.880159  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:42.880166  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:42.880227  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:42.904811  604010 cri.go:89] found id: ""
	I1213 11:53:42.904834  604010 logs.go:282] 0 containers: []
	W1213 11:53:42.904843  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:42.904850  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:42.904908  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:42.930715  604010 cri.go:89] found id: ""
	I1213 11:53:42.930744  604010 logs.go:282] 0 containers: []
	W1213 11:53:42.930753  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:42.930759  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:42.930815  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:42.964092  604010 cri.go:89] found id: ""
	I1213 11:53:42.964115  604010 logs.go:282] 0 containers: []
	W1213 11:53:42.964123  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:42.964132  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:42.964144  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:42.994219  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:42.994254  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:43.031007  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:43.031036  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:43.086377  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:43.086412  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:43.103185  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:43.103216  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:43.180526  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:43.171640    3695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:43.172414    3695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:43.174057    3695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:43.174649    3695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:43.176278    3695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:45.681512  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:45.691980  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:45.692050  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:45.720468  604010 cri.go:89] found id: ""
	I1213 11:53:45.720494  604010 logs.go:282] 0 containers: []
	W1213 11:53:45.720503  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:45.720509  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:45.720566  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:45.745270  604010 cri.go:89] found id: ""
	I1213 11:53:45.745297  604010 logs.go:282] 0 containers: []
	W1213 11:53:45.745305  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:45.745312  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:45.745371  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:45.771959  604010 cri.go:89] found id: ""
	I1213 11:53:45.771989  604010 logs.go:282] 0 containers: []
	W1213 11:53:45.771998  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:45.772005  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:45.772063  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:45.797561  604010 cri.go:89] found id: ""
	I1213 11:53:45.797588  604010 logs.go:282] 0 containers: []
	W1213 11:53:45.797597  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:45.797604  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:45.797666  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:45.821937  604010 cri.go:89] found id: ""
	I1213 11:53:45.821965  604010 logs.go:282] 0 containers: []
	W1213 11:53:45.821975  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:45.821981  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:45.822041  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:45.854390  604010 cri.go:89] found id: ""
	I1213 11:53:45.854414  604010 logs.go:282] 0 containers: []
	W1213 11:53:45.854423  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:45.854430  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:45.854489  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:45.879570  604010 cri.go:89] found id: ""
	I1213 11:53:45.879597  604010 logs.go:282] 0 containers: []
	W1213 11:53:45.879616  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:45.879623  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:45.879681  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:45.904307  604010 cri.go:89] found id: ""
	I1213 11:53:45.904335  604010 logs.go:282] 0 containers: []
	W1213 11:53:45.904344  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:45.904354  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:45.904364  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:45.971467  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:45.971554  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:45.988842  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:45.988868  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:46.054484  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:46.046672    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:46.047076    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:46.048668    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:46.049161    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:46.050614    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:46.054553  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:46.054579  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:46.079997  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:46.080032  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:48.608207  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:48.618848  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:48.618926  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:48.644320  604010 cri.go:89] found id: ""
	I1213 11:53:48.644344  604010 logs.go:282] 0 containers: []
	W1213 11:53:48.644352  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:48.644359  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:48.644420  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:48.669194  604010 cri.go:89] found id: ""
	I1213 11:53:48.669226  604010 logs.go:282] 0 containers: []
	W1213 11:53:48.669236  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:48.669242  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:48.669308  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:48.694072  604010 cri.go:89] found id: ""
	I1213 11:53:48.694097  604010 logs.go:282] 0 containers: []
	W1213 11:53:48.694107  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:48.694113  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:48.694188  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:48.718654  604010 cri.go:89] found id: ""
	I1213 11:53:48.718679  604010 logs.go:282] 0 containers: []
	W1213 11:53:48.718720  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:48.718727  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:48.718800  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:48.742539  604010 cri.go:89] found id: ""
	I1213 11:53:48.742571  604010 logs.go:282] 0 containers: []
	W1213 11:53:48.742580  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:48.742587  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:48.742660  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:48.771087  604010 cri.go:89] found id: ""
	I1213 11:53:48.771111  604010 logs.go:282] 0 containers: []
	W1213 11:53:48.771120  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:48.771126  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:48.771185  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:48.797732  604010 cri.go:89] found id: ""
	I1213 11:53:48.797755  604010 logs.go:282] 0 containers: []
	W1213 11:53:48.797764  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:48.797770  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:48.797834  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:48.822805  604010 cri.go:89] found id: ""
	I1213 11:53:48.822830  604010 logs.go:282] 0 containers: []
	W1213 11:53:48.822839  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:48.822849  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:48.822860  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:48.879446  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:48.879514  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:48.895910  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:48.895938  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:48.987206  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:48.978941    3903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:48.979739    3903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:48.981488    3903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:48.981826    3903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:48.983267    3903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:48.987238  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:48.987251  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:49.014114  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:49.014150  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:50.175475  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:53:50.239481  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:53:50.239579  604010 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 11:53:50.242787  604010 out.go:179] * Enabled addons: 
	I1213 11:53:50.245448  604010 addons.go:530] duration metric: took 1m54.618181483s for enable addons: enabled=[]
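Every "error validating ..." line in the dashboard failure above reports the same underlying condition, "dial tcp [::1]:8443: connect: connection refused": the manifests are never actually validated because nothing is serving on the apiserver port, so the --validate=false hint in the message would not change the outcome. A minimal sketch for confirming this from the node, assuming a placeholder <profile> name; the port number and the crictl invocation are taken from the log itself, and `ss` is assumed to be available in the node image:

    # Is anything listening on the apiserver port (8443, per the errors above)?
    minikube ssh -p <profile> -- sudo ss -tlnp 'sport = :8443'
    # Is there any kube-apiserver container at all, running or exited?
    minikube ssh -p <profile> -- sudo crictl ps -a --name=kube-apiserver

An empty result from both would match what the wait loop below keeps finding.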
	I1213 11:53:51.543477  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:51.554449  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:51.554521  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:51.579307  604010 cri.go:89] found id: ""
	I1213 11:53:51.579335  604010 logs.go:282] 0 containers: []
	W1213 11:53:51.579344  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:51.579350  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:51.579411  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:51.605002  604010 cri.go:89] found id: ""
	I1213 11:53:51.605029  604010 logs.go:282] 0 containers: []
	W1213 11:53:51.605040  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:51.605047  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:51.605108  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:51.629728  604010 cri.go:89] found id: ""
	I1213 11:53:51.629761  604010 logs.go:282] 0 containers: []
	W1213 11:53:51.629770  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:51.629777  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:51.629840  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:51.656823  604010 cri.go:89] found id: ""
	I1213 11:53:51.656846  604010 logs.go:282] 0 containers: []
	W1213 11:53:51.656855  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:51.656862  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:51.656919  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:51.684689  604010 cri.go:89] found id: ""
	I1213 11:53:51.684712  604010 logs.go:282] 0 containers: []
	W1213 11:53:51.684721  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:51.684728  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:51.684787  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:51.709741  604010 cri.go:89] found id: ""
	I1213 11:53:51.709768  604010 logs.go:282] 0 containers: []
	W1213 11:53:51.709776  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:51.709784  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:51.709895  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:51.735821  604010 cri.go:89] found id: ""
	I1213 11:53:51.735848  604010 logs.go:282] 0 containers: []
	W1213 11:53:51.735857  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:51.735863  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:51.735922  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:51.765085  604010 cri.go:89] found id: ""
	I1213 11:53:51.765111  604010 logs.go:282] 0 containers: []
	W1213 11:53:51.765120  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:51.765130  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:51.765143  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:51.820951  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:51.820986  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:51.837298  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:51.837448  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:51.903778  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:51.894875    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:51.895698    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:51.897404    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:51.897825    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:51.899293    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:53:51.894875    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:51.895698    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:51.897404    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:51.897825    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:51.899293    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:53:51.903855  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:51.903876  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:51.931477  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:51.931561  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
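The block from 11:53:51.543 down to this point is one full iteration of the control-plane wait loop: a pgrep for a kube-apiserver process, a crictl query per expected component (apiserver, etcd, coredns, scheduler, proxy, controller-manager, kindnet, dashboard), then log gathering (kubelet, dmesg, describe nodes, containerd, container status). The timestamps show the same iteration repeating roughly every three seconds for the remainder of this section, always with `found id: ""` for every component. A hedged sketch of the equivalent poll, using only commands that appear verbatim above; the loop structure, exit condition, and interval are illustrative, not minikube's actual code:

    # Poll until a kube-apiserver process appears, re-listing containers on each miss.
    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sudo crictl ps -a --quiet --name=kube-apiserver   # empty on every pass in this log
      sleep 3                                           # interval inferred from the timestamps
    done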
	I1213 11:53:54.461061  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:54.471768  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:54.471839  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:54.497629  604010 cri.go:89] found id: ""
	I1213 11:53:54.497651  604010 logs.go:282] 0 containers: []
	W1213 11:53:54.497660  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:54.497666  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:54.497725  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:54.523805  604010 cri.go:89] found id: ""
	I1213 11:53:54.523830  604010 logs.go:282] 0 containers: []
	W1213 11:53:54.523839  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:54.523846  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:54.523905  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:54.548988  604010 cri.go:89] found id: ""
	I1213 11:53:54.549012  604010 logs.go:282] 0 containers: []
	W1213 11:53:54.549021  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:54.549027  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:54.549089  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:54.584912  604010 cri.go:89] found id: ""
	I1213 11:53:54.584996  604010 logs.go:282] 0 containers: []
	W1213 11:53:54.585012  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:54.585020  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:54.585094  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:54.613768  604010 cri.go:89] found id: ""
	I1213 11:53:54.613810  604010 logs.go:282] 0 containers: []
	W1213 11:53:54.613822  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:54.613832  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:54.613917  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:54.638498  604010 cri.go:89] found id: ""
	I1213 11:53:54.638523  604010 logs.go:282] 0 containers: []
	W1213 11:53:54.638531  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:54.638539  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:54.638597  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:54.663796  604010 cri.go:89] found id: ""
	I1213 11:53:54.663863  604010 logs.go:282] 0 containers: []
	W1213 11:53:54.663886  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:54.663904  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:54.663994  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:54.688512  604010 cri.go:89] found id: ""
	I1213 11:53:54.688595  604010 logs.go:282] 0 containers: []
	W1213 11:53:54.688612  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:54.688623  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:54.688635  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:54.745122  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:54.745158  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:54.761471  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:54.761502  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:54.827485  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:54.818964    4132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:54.819562    4132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:54.821065    4132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:54.821615    4132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:54.823257    4132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:53:54.818964    4132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:54.819562    4132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:54.821065    4132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:54.821615    4132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:54.823257    4132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:53:54.827506  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:54.827519  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:54.853348  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:54.853383  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:57.386439  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:57.396996  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:57.397067  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:57.432425  604010 cri.go:89] found id: ""
	I1213 11:53:57.432451  604010 logs.go:282] 0 containers: []
	W1213 11:53:57.432461  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:57.432468  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:57.432531  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:57.468740  604010 cri.go:89] found id: ""
	I1213 11:53:57.468767  604010 logs.go:282] 0 containers: []
	W1213 11:53:57.468777  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:57.468783  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:57.468848  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:57.496008  604010 cri.go:89] found id: ""
	I1213 11:53:57.496032  604010 logs.go:282] 0 containers: []
	W1213 11:53:57.496041  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:57.496053  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:57.496113  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:57.522430  604010 cri.go:89] found id: ""
	I1213 11:53:57.522454  604010 logs.go:282] 0 containers: []
	W1213 11:53:57.522463  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:57.522469  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:57.522528  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:57.547956  604010 cri.go:89] found id: ""
	I1213 11:53:57.547980  604010 logs.go:282] 0 containers: []
	W1213 11:53:57.547988  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:57.547994  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:57.548054  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:57.573554  604010 cri.go:89] found id: ""
	I1213 11:53:57.573579  604010 logs.go:282] 0 containers: []
	W1213 11:53:57.573589  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:57.573596  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:57.573658  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:57.597400  604010 cri.go:89] found id: ""
	I1213 11:53:57.597428  604010 logs.go:282] 0 containers: []
	W1213 11:53:57.597437  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:57.597443  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:57.597501  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:57.621599  604010 cri.go:89] found id: ""
	I1213 11:53:57.621623  604010 logs.go:282] 0 containers: []
	W1213 11:53:57.621632  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:57.621642  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:57.621653  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:57.677116  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:57.677153  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:57.692856  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:57.692929  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:57.758229  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:57.748721    4239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:57.749368    4239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:57.751042    4239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:57.751857    4239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:57.753632    4239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:53:57.748721    4239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:57.749368    4239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:57.751042    4239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:57.751857    4239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:57.753632    4239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:53:57.758252  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:57.758266  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:57.784520  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:57.784560  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:00.317292  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:00.352525  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:00.352620  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:00.392603  604010 cri.go:89] found id: ""
	I1213 11:54:00.392636  604010 logs.go:282] 0 containers: []
	W1213 11:54:00.392646  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:00.392654  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:00.392736  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:00.447117  604010 cri.go:89] found id: ""
	I1213 11:54:00.447149  604010 logs.go:282] 0 containers: []
	W1213 11:54:00.447158  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:00.447178  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:00.447281  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:00.479294  604010 cri.go:89] found id: ""
	I1213 11:54:00.479324  604010 logs.go:282] 0 containers: []
	W1213 11:54:00.479333  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:00.479339  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:00.479406  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:00.510064  604010 cri.go:89] found id: ""
	I1213 11:54:00.510092  604010 logs.go:282] 0 containers: []
	W1213 11:54:00.510101  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:00.510108  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:00.510184  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:00.537774  604010 cri.go:89] found id: ""
	I1213 11:54:00.537801  604010 logs.go:282] 0 containers: []
	W1213 11:54:00.537810  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:00.537816  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:00.537877  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:00.563430  604010 cri.go:89] found id: ""
	I1213 11:54:00.563460  604010 logs.go:282] 0 containers: []
	W1213 11:54:00.563469  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:00.563475  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:00.563534  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:00.588470  604010 cri.go:89] found id: ""
	I1213 11:54:00.588495  604010 logs.go:282] 0 containers: []
	W1213 11:54:00.588503  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:00.588510  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:00.588573  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:00.616819  604010 cri.go:89] found id: ""
	I1213 11:54:00.616853  604010 logs.go:282] 0 containers: []
	W1213 11:54:00.616865  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:00.616874  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:00.616887  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:00.632810  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:00.632837  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:00.697200  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:00.688095    4352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:00.688902    4352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:00.690382    4352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:00.690873    4352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:00.692718    4352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:00.688095    4352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:00.688902    4352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:00.690382    4352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:00.690873    4352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:00.692718    4352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:54:00.697225  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:00.697239  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:00.722351  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:00.722391  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:00.753453  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:00.753489  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:03.309839  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:03.321093  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:03.321163  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:03.349567  604010 cri.go:89] found id: ""
	I1213 11:54:03.349591  604010 logs.go:282] 0 containers: []
	W1213 11:54:03.349600  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:03.349607  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:03.349667  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:03.374734  604010 cri.go:89] found id: ""
	I1213 11:54:03.374758  604010 logs.go:282] 0 containers: []
	W1213 11:54:03.374767  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:03.374774  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:03.374842  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:03.400074  604010 cri.go:89] found id: ""
	I1213 11:54:03.400099  604010 logs.go:282] 0 containers: []
	W1213 11:54:03.400108  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:03.400114  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:03.400172  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:03.461432  604010 cri.go:89] found id: ""
	I1213 11:54:03.461533  604010 logs.go:282] 0 containers: []
	W1213 11:54:03.461561  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:03.461583  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:03.461673  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:03.504466  604010 cri.go:89] found id: ""
	I1213 11:54:03.504544  604010 logs.go:282] 0 containers: []
	W1213 11:54:03.504566  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:03.504585  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:03.504671  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:03.545459  604010 cri.go:89] found id: ""
	I1213 11:54:03.545482  604010 logs.go:282] 0 containers: []
	W1213 11:54:03.545491  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:03.545497  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:03.545575  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:03.570446  604010 cri.go:89] found id: ""
	I1213 11:54:03.570468  604010 logs.go:282] 0 containers: []
	W1213 11:54:03.570476  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:03.570482  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:03.570539  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:03.595001  604010 cri.go:89] found id: ""
	I1213 11:54:03.595023  604010 logs.go:282] 0 containers: []
	W1213 11:54:03.595031  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:03.595041  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:03.595057  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:03.610922  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:03.610955  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:03.679130  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:03.671134    4462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:03.671746    4462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:03.673204    4462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:03.673644    4462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:03.675078    4462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:03.671134    4462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:03.671746    4462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:03.673204    4462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:03.673644    4462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:03.675078    4462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:54:03.679152  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:03.679167  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:03.705484  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:03.705522  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:03.732753  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:03.732778  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:06.289051  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:06.299935  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:06.300031  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:06.325745  604010 cri.go:89] found id: ""
	I1213 11:54:06.325777  604010 logs.go:282] 0 containers: []
	W1213 11:54:06.325787  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:06.325794  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:06.325898  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:06.352273  604010 cri.go:89] found id: ""
	I1213 11:54:06.352342  604010 logs.go:282] 0 containers: []
	W1213 11:54:06.352357  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:06.352365  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:06.352437  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:06.376413  604010 cri.go:89] found id: ""
	I1213 11:54:06.376482  604010 logs.go:282] 0 containers: []
	W1213 11:54:06.376507  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:06.376520  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:06.376596  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:06.406144  604010 cri.go:89] found id: ""
	I1213 11:54:06.406188  604010 logs.go:282] 0 containers: []
	W1213 11:54:06.406198  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:06.406206  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:06.406285  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:06.456311  604010 cri.go:89] found id: ""
	I1213 11:54:06.456388  604010 logs.go:282] 0 containers: []
	W1213 11:54:06.456411  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:06.456430  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:06.456526  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:06.510060  604010 cri.go:89] found id: ""
	I1213 11:54:06.510150  604010 logs.go:282] 0 containers: []
	W1213 11:54:06.510174  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:06.510194  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:06.510310  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:06.542373  604010 cri.go:89] found id: ""
	I1213 11:54:06.542450  604010 logs.go:282] 0 containers: []
	W1213 11:54:06.542472  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:06.542494  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:06.542601  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:06.567983  604010 cri.go:89] found id: ""
	I1213 11:54:06.568063  604010 logs.go:282] 0 containers: []
	W1213 11:54:06.568087  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:06.568104  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:06.568129  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:06.624463  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:06.624498  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:06.640970  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:06.641003  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:06.714019  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:06.704918    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:06.705767    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:06.706758    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:06.708430    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:06.708734    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:06.704918    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:06.705767    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:06.706758    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:06.708430    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:06.708734    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:54:06.714096  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:06.714117  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:06.739708  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:06.739748  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:09.268501  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:09.279334  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:09.279413  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:09.308998  604010 cri.go:89] found id: ""
	I1213 11:54:09.309034  604010 logs.go:282] 0 containers: []
	W1213 11:54:09.309043  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:09.309050  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:09.309110  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:09.336921  604010 cri.go:89] found id: ""
	I1213 11:54:09.336947  604010 logs.go:282] 0 containers: []
	W1213 11:54:09.336956  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:09.336963  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:09.337025  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:09.367100  604010 cri.go:89] found id: ""
	I1213 11:54:09.367123  604010 logs.go:282] 0 containers: []
	W1213 11:54:09.367131  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:09.367138  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:09.367196  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:09.392881  604010 cri.go:89] found id: ""
	I1213 11:54:09.392913  604010 logs.go:282] 0 containers: []
	W1213 11:54:09.392922  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:09.392930  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:09.392991  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:09.433300  604010 cri.go:89] found id: ""
	I1213 11:54:09.433330  604010 logs.go:282] 0 containers: []
	W1213 11:54:09.433339  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:09.433345  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:09.433408  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:09.499329  604010 cri.go:89] found id: ""
	I1213 11:54:09.499357  604010 logs.go:282] 0 containers: []
	W1213 11:54:09.499365  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:09.499372  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:09.499434  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:09.526348  604010 cri.go:89] found id: ""
	I1213 11:54:09.526383  604010 logs.go:282] 0 containers: []
	W1213 11:54:09.526392  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:09.526399  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:09.526467  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:09.551552  604010 cri.go:89] found id: ""
	I1213 11:54:09.551585  604010 logs.go:282] 0 containers: []
	W1213 11:54:09.551595  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:09.551605  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:09.551617  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:09.607976  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:09.608011  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:09.624198  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:09.624228  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:09.692042  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:09.683184    4687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:09.683833    4687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:09.685650    4687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:09.686276    4687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:09.688111    4687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:09.683184    4687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:09.683833    4687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:09.685650    4687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:09.686276    4687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:09.688111    4687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:54:09.692065  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:09.692077  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:09.717762  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:09.717799  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
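Each polling cycle in this section starts the same way: pgrep -xnf "kube-apiserver.*minikube.*" looks for the apiserver process on the host, and when that finds nothing, crictl ps -a --quiet --name=<component> is run for each control-plane component in turn. A minimal manual reproduction from a shell on the node looks like the following (a sketch; it assumes shell access to the node and reuses the kubectl binary path and kubeconfig shown in the log):

    # Probe for the apiserver the same way the cycle above does.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no apiserver process"
    sudo crictl ps -a --quiet --name=kube-apiserver
    # Query the health endpoint directly; while nothing listens on
    # localhost:8443 this fails with "connection refused", as seen above.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get --raw /healthz \
        --kubeconfig=/var/lib/minikube/kubeconfig

The empty found id: "" results throughout the section mean none of these probes ever located a control-plane container.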
	I1213 11:54:12.251306  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:12.261889  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:12.261958  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:12.286128  604010 cri.go:89] found id: ""
	I1213 11:54:12.286151  604010 logs.go:282] 0 containers: []
	W1213 11:54:12.286160  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:12.286166  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:12.286231  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:12.320955  604010 cri.go:89] found id: ""
	I1213 11:54:12.320982  604010 logs.go:282] 0 containers: []
	W1213 11:54:12.320992  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:12.320999  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:12.321064  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:12.347366  604010 cri.go:89] found id: ""
	I1213 11:54:12.347394  604010 logs.go:282] 0 containers: []
	W1213 11:54:12.347404  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:12.347411  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:12.347475  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:12.372047  604010 cri.go:89] found id: ""
	I1213 11:54:12.372075  604010 logs.go:282] 0 containers: []
	W1213 11:54:12.372084  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:12.372091  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:12.372211  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:12.397441  604010 cri.go:89] found id: ""
	I1213 11:54:12.397466  604010 logs.go:282] 0 containers: []
	W1213 11:54:12.397475  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:12.397482  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:12.397610  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:12.458383  604010 cri.go:89] found id: ""
	I1213 11:54:12.458464  604010 logs.go:282] 0 containers: []
	W1213 11:54:12.458487  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:12.458505  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:12.458610  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:12.499011  604010 cri.go:89] found id: ""
	I1213 11:54:12.499087  604010 logs.go:282] 0 containers: []
	W1213 11:54:12.499110  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:12.499128  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:12.499223  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:12.526019  604010 cri.go:89] found id: ""
	I1213 11:54:12.526048  604010 logs.go:282] 0 containers: []
	W1213 11:54:12.526058  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:12.526068  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:12.526079  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:12.582388  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:12.582425  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:12.598760  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:12.598788  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:12.668226  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:12.659694    4803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:12.660116    4803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:12.661902    4803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:12.662352    4803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:12.663961    4803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:12.659694    4803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:12.660116    4803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:12.661902    4803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:12.662352    4803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:12.663961    4803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:54:12.668250  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:12.668263  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:12.698476  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:12.698514  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:15.226309  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:15.237066  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:15.237138  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:15.261808  604010 cri.go:89] found id: ""
	I1213 11:54:15.261836  604010 logs.go:282] 0 containers: []
	W1213 11:54:15.261845  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:15.261851  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:15.261912  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:15.286942  604010 cri.go:89] found id: ""
	I1213 11:54:15.286966  604010 logs.go:282] 0 containers: []
	W1213 11:54:15.286975  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:15.286981  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:15.287066  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:15.311813  604010 cri.go:89] found id: ""
	I1213 11:54:15.311842  604010 logs.go:282] 0 containers: []
	W1213 11:54:15.311852  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:15.311859  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:15.311920  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:15.341088  604010 cri.go:89] found id: ""
	I1213 11:54:15.341116  604010 logs.go:282] 0 containers: []
	W1213 11:54:15.341124  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:15.341131  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:15.341188  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:15.365220  604010 cri.go:89] found id: ""
	I1213 11:54:15.365247  604010 logs.go:282] 0 containers: []
	W1213 11:54:15.365256  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:15.365263  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:15.365319  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:15.389056  604010 cri.go:89] found id: ""
	I1213 11:54:15.389084  604010 logs.go:282] 0 containers: []
	W1213 11:54:15.389093  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:15.389099  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:15.389159  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:15.424168  604010 cri.go:89] found id: ""
	I1213 11:54:15.424197  604010 logs.go:282] 0 containers: []
	W1213 11:54:15.424206  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:15.424215  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:15.424275  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:15.458977  604010 cri.go:89] found id: ""
	I1213 11:54:15.459014  604010 logs.go:282] 0 containers: []
	W1213 11:54:15.459023  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:15.459033  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:15.459045  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:15.488624  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:15.488665  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:15.534272  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:15.534300  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:15.593055  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:15.593092  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:15.609340  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:15.609370  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:15.673503  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:15.664722    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:15.665497    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:15.667260    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:15.667958    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:15.669529    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:15.664722    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:15.665497    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:15.667260    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:15.667958    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:15.669529    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
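The five identical memcache.go errors per attempt are kubectl's API discovery retries: each one is a TCP connect to https://localhost:8443 that the node refuses because nothing is listening on the port. This can be confirmed from the node in one line (a sketch; it assumes the ss utility from iproute2 is present in the node image):

    # An empty result confirms the "connection refused" errors above.
    sudo ss -ltn | grep 8443 || echo "nothing listening on 8443"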
	I1213 11:54:18.175202  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:18.185611  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:18.185684  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:18.216571  604010 cri.go:89] found id: ""
	I1213 11:54:18.216598  604010 logs.go:282] 0 containers: []
	W1213 11:54:18.216609  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:18.216616  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:18.216676  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:18.244020  604010 cri.go:89] found id: ""
	I1213 11:54:18.244044  604010 logs.go:282] 0 containers: []
	W1213 11:54:18.244053  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:18.244060  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:18.244125  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:18.269644  604010 cri.go:89] found id: ""
	I1213 11:54:18.269677  604010 logs.go:282] 0 containers: []
	W1213 11:54:18.269686  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:18.269699  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:18.269759  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:18.295049  604010 cri.go:89] found id: ""
	I1213 11:54:18.295074  604010 logs.go:282] 0 containers: []
	W1213 11:54:18.295084  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:18.295092  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:18.295151  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:18.319970  604010 cri.go:89] found id: ""
	I1213 11:54:18.319994  604010 logs.go:282] 0 containers: []
	W1213 11:54:18.320003  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:18.320009  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:18.320068  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:18.348557  604010 cri.go:89] found id: ""
	I1213 11:54:18.348583  604010 logs.go:282] 0 containers: []
	W1213 11:54:18.348591  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:18.348598  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:18.348661  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:18.372733  604010 cri.go:89] found id: ""
	I1213 11:54:18.372759  604010 logs.go:282] 0 containers: []
	W1213 11:54:18.372769  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:18.372775  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:18.372833  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:18.397904  604010 cri.go:89] found id: ""
	I1213 11:54:18.397927  604010 logs.go:282] 0 containers: []
	W1213 11:54:18.397936  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:18.397945  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:18.397958  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:18.475145  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:18.475177  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:18.509115  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:18.509140  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:18.578046  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:18.568558    5027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:18.569407    5027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:18.571224    5027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:18.571849    5027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:18.573663    5027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:18.568558    5027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:18.569407    5027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:18.571224    5027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:18.571849    5027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:18.573663    5027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:54:18.578069  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:18.578080  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:18.604022  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:18.604057  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
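Every describe-nodes attempt passes --kubeconfig=/var/lib/minikube/kubeconfig, and that kubeconfig is what points kubectl at localhost:8443. The configured endpoint can be read straight out of the file (a sketch; it assumes the standard YAML kubeconfig layout with a server: field):

    # Print the apiserver endpoint kubectl is trying to reach.
    sudo grep 'server:' /var/lib/minikube/kubeconfig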
	I1213 11:54:21.135717  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:21.151653  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:21.151722  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:21.181267  604010 cri.go:89] found id: ""
	I1213 11:54:21.181292  604010 logs.go:282] 0 containers: []
	W1213 11:54:21.181300  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:21.181306  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:21.181363  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:21.211036  604010 cri.go:89] found id: ""
	I1213 11:54:21.211064  604010 logs.go:282] 0 containers: []
	W1213 11:54:21.211073  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:21.211079  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:21.211136  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:21.235057  604010 cri.go:89] found id: ""
	I1213 11:54:21.235082  604010 logs.go:282] 0 containers: []
	W1213 11:54:21.235091  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:21.235097  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:21.235158  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:21.259604  604010 cri.go:89] found id: ""
	I1213 11:54:21.259629  604010 logs.go:282] 0 containers: []
	W1213 11:54:21.259637  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:21.259644  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:21.259710  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:21.284921  604010 cri.go:89] found id: ""
	I1213 11:54:21.284948  604010 logs.go:282] 0 containers: []
	W1213 11:54:21.284957  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:21.284963  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:21.285022  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:21.311134  604010 cri.go:89] found id: ""
	I1213 11:54:21.311162  604010 logs.go:282] 0 containers: []
	W1213 11:54:21.311171  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:21.311178  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:21.311238  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:21.337100  604010 cri.go:89] found id: ""
	I1213 11:54:21.337124  604010 logs.go:282] 0 containers: []
	W1213 11:54:21.337133  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:21.337140  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:21.337201  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:21.361945  604010 cri.go:89] found id: ""
	I1213 11:54:21.361969  604010 logs.go:282] 0 containers: []
	W1213 11:54:21.361977  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:21.361987  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:21.362001  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:21.424925  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:21.424964  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:21.442370  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:21.442449  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:21.544421  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:21.527951    5142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:21.529143    5142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:21.530038    5142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:21.535082    5142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:21.535420    5142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:21.527951    5142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:21.529143    5142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:21.530038    5142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:21.535082    5142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:21.535420    5142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:54:21.544487  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:21.544508  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:21.569861  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:21.569899  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:24.098574  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:24.109255  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:24.109328  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:24.135881  604010 cri.go:89] found id: ""
	I1213 11:54:24.135904  604010 logs.go:282] 0 containers: []
	W1213 11:54:24.135913  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:24.135919  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:24.135976  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:24.160249  604010 cri.go:89] found id: ""
	I1213 11:54:24.160272  604010 logs.go:282] 0 containers: []
	W1213 11:54:24.160281  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:24.160294  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:24.160356  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:24.185097  604010 cri.go:89] found id: ""
	I1213 11:54:24.185120  604010 logs.go:282] 0 containers: []
	W1213 11:54:24.185129  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:24.185136  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:24.185197  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:24.210052  604010 cri.go:89] found id: ""
	I1213 11:54:24.210133  604010 logs.go:282] 0 containers: []
	W1213 11:54:24.210156  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:24.210174  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:24.210263  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:24.234868  604010 cri.go:89] found id: ""
	I1213 11:54:24.234895  604010 logs.go:282] 0 containers: []
	W1213 11:54:24.234905  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:24.234912  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:24.234968  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:24.258998  604010 cri.go:89] found id: ""
	I1213 11:54:24.259023  604010 logs.go:282] 0 containers: []
	W1213 11:54:24.259032  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:24.259039  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:24.259099  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:24.282644  604010 cri.go:89] found id: ""
	I1213 11:54:24.282672  604010 logs.go:282] 0 containers: []
	W1213 11:54:24.282713  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:24.282721  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:24.282780  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:24.312793  604010 cri.go:89] found id: ""
	I1213 11:54:24.312822  604010 logs.go:282] 0 containers: []
	W1213 11:54:24.312831  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:24.312841  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:24.312853  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:24.328614  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:24.328643  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:24.398953  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:24.390748    5250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:24.391466    5250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:24.392548    5250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:24.393304    5250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:24.394893    5250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:24.390748    5250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:24.391466    5250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:24.392548    5250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:24.393304    5250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:24.394893    5250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:54:24.398978  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:24.398992  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:24.447276  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:24.447353  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:24.512358  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:24.512384  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
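The "Gathering logs for ..." steps run in a different order from cycle to cycle (kubelet first in some passes, containerd or container status first in others), which suggests iteration over an unordered collection rather than a fixed sequence. The container-status step itself is a fallback chain; expanded out of its backtick substitution it reads (a sketch of the same command in more explicit form):

    # Prefer a resolved crictl path, fall back to the bare name, and
    # finally to docker if crictl fails entirely.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a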
	I1213 11:54:27.079756  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:27.090085  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:27.090157  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:27.114934  604010 cri.go:89] found id: ""
	I1213 11:54:27.114957  604010 logs.go:282] 0 containers: []
	W1213 11:54:27.114966  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:27.114972  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:27.115032  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:27.139399  604010 cri.go:89] found id: ""
	I1213 11:54:27.139424  604010 logs.go:282] 0 containers: []
	W1213 11:54:27.139433  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:27.139439  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:27.139496  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:27.164348  604010 cri.go:89] found id: ""
	I1213 11:54:27.164371  604010 logs.go:282] 0 containers: []
	W1213 11:54:27.164379  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:27.164385  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:27.164443  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:27.189263  604010 cri.go:89] found id: ""
	I1213 11:54:27.189286  604010 logs.go:282] 0 containers: []
	W1213 11:54:27.189294  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:27.189302  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:27.189362  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:27.214003  604010 cri.go:89] found id: ""
	I1213 11:54:27.214076  604010 logs.go:282] 0 containers: []
	W1213 11:54:27.214101  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:27.214121  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:27.214204  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:27.238568  604010 cri.go:89] found id: ""
	I1213 11:54:27.238632  604010 logs.go:282] 0 containers: []
	W1213 11:54:27.238657  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:27.238675  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:27.238861  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:27.263827  604010 cri.go:89] found id: ""
	I1213 11:54:27.263850  604010 logs.go:282] 0 containers: []
	W1213 11:54:27.263858  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:27.263864  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:27.263941  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:27.293643  604010 cri.go:89] found id: ""
	I1213 11:54:27.293672  604010 logs.go:282] 0 containers: []
	W1213 11:54:27.293680  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:27.293691  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:27.293706  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:27.353462  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:27.353498  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:27.369639  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:27.369723  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:27.462957  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:27.448639    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:27.449130    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:27.455578    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:27.456379    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:27.459064    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:27.448639    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:27.449130    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:27.455578    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:27.456379    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:27.459064    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:54:27.462984  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:27.463007  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:27.502080  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:27.502115  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:30.033979  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:30.048817  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:30.048921  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:30.086312  604010 cri.go:89] found id: ""
	I1213 11:54:30.086343  604010 logs.go:282] 0 containers: []
	W1213 11:54:30.086353  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:30.086361  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:30.086431  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:30.118027  604010 cri.go:89] found id: ""
	I1213 11:54:30.118056  604010 logs.go:282] 0 containers: []
	W1213 11:54:30.118066  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:30.118073  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:30.118139  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:30.150398  604010 cri.go:89] found id: ""
	I1213 11:54:30.150422  604010 logs.go:282] 0 containers: []
	W1213 11:54:30.150431  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:30.150437  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:30.150501  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:30.176994  604010 cri.go:89] found id: ""
	I1213 11:54:30.177024  604010 logs.go:282] 0 containers: []
	W1213 11:54:30.177033  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:30.177040  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:30.177102  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:30.204667  604010 cri.go:89] found id: ""
	I1213 11:54:30.204692  604010 logs.go:282] 0 containers: []
	W1213 11:54:30.204702  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:30.204709  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:30.204768  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:30.233311  604010 cri.go:89] found id: ""
	I1213 11:54:30.233340  604010 logs.go:282] 0 containers: []
	W1213 11:54:30.233350  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:30.233357  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:30.233443  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:30.258722  604010 cri.go:89] found id: ""
	I1213 11:54:30.258749  604010 logs.go:282] 0 containers: []
	W1213 11:54:30.258759  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:30.258766  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:30.258828  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:30.284738  604010 cri.go:89] found id: ""
	I1213 11:54:30.284766  604010 logs.go:282] 0 containers: []
	W1213 11:54:30.284775  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:30.284785  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:30.284797  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:30.352842  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:30.344108    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:30.344689    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:30.346232    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:30.346735    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:30.348264    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:30.344108    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:30.344689    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:30.346232    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:30.346735    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:30.348264    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:54:30.352861  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:30.352873  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:30.377958  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:30.377993  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:30.409746  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:30.409777  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:30.497989  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:30.498042  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:33.019623  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:33.030945  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:33.031018  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:33.060411  604010 cri.go:89] found id: ""
	I1213 11:54:33.060436  604010 logs.go:282] 0 containers: []
	W1213 11:54:33.060445  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:33.060452  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:33.060514  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:33.085659  604010 cri.go:89] found id: ""
	I1213 11:54:33.085684  604010 logs.go:282] 0 containers: []
	W1213 11:54:33.085693  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:33.085700  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:33.085762  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:33.110577  604010 cri.go:89] found id: ""
	I1213 11:54:33.110603  604010 logs.go:282] 0 containers: []
	W1213 11:54:33.110612  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:33.110618  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:33.110676  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:33.140224  604010 cri.go:89] found id: ""
	I1213 11:54:33.140252  604010 logs.go:282] 0 containers: []
	W1213 11:54:33.140261  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:33.140267  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:33.140328  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:33.165441  604010 cri.go:89] found id: ""
	I1213 11:54:33.165467  604010 logs.go:282] 0 containers: []
	W1213 11:54:33.165477  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:33.165483  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:33.165574  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:33.191299  604010 cri.go:89] found id: ""
	I1213 11:54:33.191324  604010 logs.go:282] 0 containers: []
	W1213 11:54:33.191332  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:33.191339  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:33.191400  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:33.216285  604010 cri.go:89] found id: ""
	I1213 11:54:33.216311  604010 logs.go:282] 0 containers: []
	W1213 11:54:33.216320  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:33.216327  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:33.216386  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:33.241156  604010 cri.go:89] found id: ""
	I1213 11:54:33.241180  604010 logs.go:282] 0 containers: []
	W1213 11:54:33.241189  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:33.241199  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:33.241210  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:33.269984  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:33.270014  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:33.326746  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:33.326782  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:33.343845  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:33.343874  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:33.421478  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:33.403624    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:33.404936    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:33.405920    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:33.407713    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:33.408279    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:33.421564  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:33.421594  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
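	The cycle above is minikube's per-component container probe: for each expected control-plane piece it lists matching containers with crictl and warns when none exist. A minimal sketch for rerunning the same probe by hand, assuming a shell on the node (for example via minikube ssh) and crictl on PATH; the component names are copied from the loop above, everything else is illustrative:
	
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  if [ -n "$ids" ]; then
	    echo "$name: $ids"
	  else
	    echo "no container matching \"$name\""
	  fi
	done
	
	In this run every probe takes the no-match branch, which is exactly what the repeated logs.go:284 warnings record.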
	I1213 11:54:35.956688  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:35.967776  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:35.967847  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:35.992715  604010 cri.go:89] found id: ""
	I1213 11:54:35.992745  604010 logs.go:282] 0 containers: []
	W1213 11:54:35.992753  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:35.992760  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:35.992821  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:36.030819  604010 cri.go:89] found id: ""
	I1213 11:54:36.030854  604010 logs.go:282] 0 containers: []
	W1213 11:54:36.030864  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:36.030870  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:36.030940  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:36.056512  604010 cri.go:89] found id: ""
	I1213 11:54:36.056537  604010 logs.go:282] 0 containers: []
	W1213 11:54:36.056547  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:36.056553  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:36.056613  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:36.083355  604010 cri.go:89] found id: ""
	I1213 11:54:36.083381  604010 logs.go:282] 0 containers: []
	W1213 11:54:36.083390  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:36.083397  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:36.083458  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:36.109765  604010 cri.go:89] found id: ""
	I1213 11:54:36.109791  604010 logs.go:282] 0 containers: []
	W1213 11:54:36.109799  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:36.109806  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:36.109866  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:36.139001  604010 cri.go:89] found id: ""
	I1213 11:54:36.139030  604010 logs.go:282] 0 containers: []
	W1213 11:54:36.139040  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:36.139048  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:36.139109  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:36.164252  604010 cri.go:89] found id: ""
	I1213 11:54:36.164280  604010 logs.go:282] 0 containers: []
	W1213 11:54:36.164290  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:36.164297  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:36.164419  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:36.193554  604010 cri.go:89] found id: ""
	I1213 11:54:36.193579  604010 logs.go:282] 0 containers: []
	W1213 11:54:36.193588  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:36.193597  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:36.193609  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:36.225514  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:36.225555  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:36.284505  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:36.284551  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:36.300602  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:36.300632  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:36.368620  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:36.358956    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:36.360036    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:36.361784    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:36.362389    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:36.364078    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:36.368642  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:36.368654  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:38.894313  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:38.906401  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:38.906478  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:38.931173  604010 cri.go:89] found id: ""
	I1213 11:54:38.931200  604010 logs.go:282] 0 containers: []
	W1213 11:54:38.931210  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:38.931217  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:38.931280  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:38.957289  604010 cri.go:89] found id: ""
	I1213 11:54:38.957315  604010 logs.go:282] 0 containers: []
	W1213 11:54:38.957324  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:38.957330  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:38.957391  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:38.984282  604010 cri.go:89] found id: ""
	I1213 11:54:38.984307  604010 logs.go:282] 0 containers: []
	W1213 11:54:38.984317  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:38.984323  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:38.984402  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:39.012924  604010 cri.go:89] found id: ""
	I1213 11:54:39.012994  604010 logs.go:282] 0 containers: []
	W1213 11:54:39.013012  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:39.013021  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:39.013085  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:39.039025  604010 cri.go:89] found id: ""
	I1213 11:54:39.039062  604010 logs.go:282] 0 containers: []
	W1213 11:54:39.039071  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:39.039077  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:39.039145  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:39.066984  604010 cri.go:89] found id: ""
	I1213 11:54:39.067009  604010 logs.go:282] 0 containers: []
	W1213 11:54:39.067018  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:39.067024  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:39.067088  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:39.093147  604010 cri.go:89] found id: ""
	I1213 11:54:39.093172  604010 logs.go:282] 0 containers: []
	W1213 11:54:39.093181  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:39.093188  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:39.093247  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:39.120841  604010 cri.go:89] found id: ""
	I1213 11:54:39.120866  604010 logs.go:282] 0 containers: []
	W1213 11:54:39.120875  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:39.120884  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:39.120896  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:39.177077  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:39.177113  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:39.193258  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:39.193284  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:39.255506  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:39.246949    5824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:39.247600    5824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:39.249297    5824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:39.249837    5824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:39.251408    5824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:39.255531  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:39.255546  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:39.280959  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:39.280995  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
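	Each retry regathers the same host-side log sources with fixed flags. The commands below are copied from the ssh_runner invocations above and can be run directly on the node to reproduce what minikube collects each cycle:
	
	# Last 400 lines of the two relevant units, recent kernel warnings, and
	# container status (crictl with a docker fallback), as in the log above.
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a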
	I1213 11:54:41.808371  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:41.820751  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:41.820829  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:41.847226  604010 cri.go:89] found id: ""
	I1213 11:54:41.847249  604010 logs.go:282] 0 containers: []
	W1213 11:54:41.847258  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:41.847264  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:41.847322  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:41.873405  604010 cri.go:89] found id: ""
	I1213 11:54:41.873436  604010 logs.go:282] 0 containers: []
	W1213 11:54:41.873448  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:41.873455  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:41.873519  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:41.899479  604010 cri.go:89] found id: ""
	I1213 11:54:41.899509  604010 logs.go:282] 0 containers: []
	W1213 11:54:41.899518  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:41.899524  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:41.899582  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:41.923515  604010 cri.go:89] found id: ""
	I1213 11:54:41.923545  604010 logs.go:282] 0 containers: []
	W1213 11:54:41.923554  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:41.923561  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:41.923621  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:41.952086  604010 cri.go:89] found id: ""
	I1213 11:54:41.952110  604010 logs.go:282] 0 containers: []
	W1213 11:54:41.952119  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:41.952125  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:41.952182  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:41.976613  604010 cri.go:89] found id: ""
	I1213 11:54:41.976637  604010 logs.go:282] 0 containers: []
	W1213 11:54:41.976646  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:41.976653  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:41.976714  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:42.010402  604010 cri.go:89] found id: ""
	I1213 11:54:42.010434  604010 logs.go:282] 0 containers: []
	W1213 11:54:42.010443  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:42.010450  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:42.010520  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:42.038928  604010 cri.go:89] found id: ""
	I1213 11:54:42.038955  604010 logs.go:282] 0 containers: []
	W1213 11:54:42.038964  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:42.038974  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:42.038985  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:42.096963  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:42.097004  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:42.115172  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:42.115213  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:42.192959  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:42.182320    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:42.183391    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:42.184373    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:42.186141    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:42.186781    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:42.192981  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:42.192995  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:42.219986  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:42.220023  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:44.750998  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:44.761521  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:44.761601  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:44.785581  604010 cri.go:89] found id: ""
	I1213 11:54:44.785609  604010 logs.go:282] 0 containers: []
	W1213 11:54:44.785618  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:44.785625  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:44.785681  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:44.810312  604010 cri.go:89] found id: ""
	I1213 11:54:44.810340  604010 logs.go:282] 0 containers: []
	W1213 11:54:44.810349  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:44.810356  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:44.810419  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:44.834980  604010 cri.go:89] found id: ""
	I1213 11:54:44.835004  604010 logs.go:282] 0 containers: []
	W1213 11:54:44.835012  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:44.835018  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:44.835082  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:44.868160  604010 cri.go:89] found id: ""
	I1213 11:54:44.868187  604010 logs.go:282] 0 containers: []
	W1213 11:54:44.868196  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:44.868203  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:44.868263  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:44.893689  604010 cri.go:89] found id: ""
	I1213 11:54:44.893715  604010 logs.go:282] 0 containers: []
	W1213 11:54:44.893723  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:44.893730  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:44.893788  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:44.918090  604010 cri.go:89] found id: ""
	I1213 11:54:44.918119  604010 logs.go:282] 0 containers: []
	W1213 11:54:44.918128  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:44.918135  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:44.918196  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:44.944994  604010 cri.go:89] found id: ""
	I1213 11:54:44.945022  604010 logs.go:282] 0 containers: []
	W1213 11:54:44.945032  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:44.945038  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:44.945102  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:44.969862  604010 cri.go:89] found id: ""
	I1213 11:54:44.969891  604010 logs.go:282] 0 containers: []
	W1213 11:54:44.969900  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:44.969910  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:44.969921  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:45.027468  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:45.027521  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:45.054117  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:45.054213  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:45.178092  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:45.159739    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:45.160529    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:45.166319    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:45.166867    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:45.169009    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:45.178126  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:45.178168  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:45.209407  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:45.209462  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
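	Every describe-nodes attempt in this window fails identically: kubectl is refused on localhost:8443 before API discovery even starts. The first command below is the exact probe from the log; the two follow-up checks are optional additions, not part of minikube's loop, and assume curl and ss exist on the node:
	
	# Probe from the log, then two hypothetical follow-ups for the refused port.
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig
	curl -ksS https://localhost:8443/healthz; echo
	sudo ss -ltn | grep -q ':8443 ' && echo "listener on 8443" || echo "nothing listening on 8443"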
	I1213 11:54:47.757891  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:47.768440  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:47.768511  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:47.797232  604010 cri.go:89] found id: ""
	I1213 11:54:47.797258  604010 logs.go:282] 0 containers: []
	W1213 11:54:47.797267  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:47.797274  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:47.797331  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:47.822035  604010 cri.go:89] found id: ""
	I1213 11:54:47.822059  604010 logs.go:282] 0 containers: []
	W1213 11:54:47.822068  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:47.822074  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:47.822139  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:47.850594  604010 cri.go:89] found id: ""
	I1213 11:54:47.850619  604010 logs.go:282] 0 containers: []
	W1213 11:54:47.850627  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:47.850634  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:47.850715  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:47.875934  604010 cri.go:89] found id: ""
	I1213 11:54:47.875958  604010 logs.go:282] 0 containers: []
	W1213 11:54:47.875967  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:47.875975  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:47.876036  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:47.904019  604010 cri.go:89] found id: ""
	I1213 11:54:47.904043  604010 logs.go:282] 0 containers: []
	W1213 11:54:47.904051  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:47.904058  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:47.904122  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:47.928717  604010 cri.go:89] found id: ""
	I1213 11:54:47.928743  604010 logs.go:282] 0 containers: []
	W1213 11:54:47.928751  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:47.928758  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:47.928818  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:47.953107  604010 cri.go:89] found id: ""
	I1213 11:54:47.953135  604010 logs.go:282] 0 containers: []
	W1213 11:54:47.953144  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:47.953152  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:47.953228  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:47.977855  604010 cri.go:89] found id: ""
	I1213 11:54:47.977891  604010 logs.go:282] 0 containers: []
	W1213 11:54:47.977900  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:47.977910  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:47.977940  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:48.033045  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:48.033085  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:48.049516  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:48.049571  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:48.119802  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:48.111384    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:48.112145    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:48.113839    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:48.114220    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:48.115737    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:48.119824  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:48.119837  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:48.144575  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:48.144606  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:50.674890  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:50.689012  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:50.689130  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:50.747025  604010 cri.go:89] found id: ""
	I1213 11:54:50.747102  604010 logs.go:282] 0 containers: []
	W1213 11:54:50.747125  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:50.747143  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:50.747232  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:50.775729  604010 cri.go:89] found id: ""
	I1213 11:54:50.775795  604010 logs.go:282] 0 containers: []
	W1213 11:54:50.775812  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:50.775820  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:50.775887  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:50.799251  604010 cri.go:89] found id: ""
	I1213 11:54:50.799277  604010 logs.go:282] 0 containers: []
	W1213 11:54:50.799286  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:50.799292  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:50.799380  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:50.822964  604010 cri.go:89] found id: ""
	I1213 11:54:50.823033  604010 logs.go:282] 0 containers: []
	W1213 11:54:50.823047  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:50.823054  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:50.823125  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:50.851245  604010 cri.go:89] found id: ""
	I1213 11:54:50.851270  604010 logs.go:282] 0 containers: []
	W1213 11:54:50.851279  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:50.851285  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:50.851346  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:50.877382  604010 cri.go:89] found id: ""
	I1213 11:54:50.877405  604010 logs.go:282] 0 containers: []
	W1213 11:54:50.877414  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:50.877420  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:50.877478  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:50.903657  604010 cri.go:89] found id: ""
	I1213 11:54:50.903681  604010 logs.go:282] 0 containers: []
	W1213 11:54:50.903690  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:50.903696  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:50.903754  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:50.931954  604010 cri.go:89] found id: ""
	I1213 11:54:50.931977  604010 logs.go:282] 0 containers: []
	W1213 11:54:50.931992  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:50.932002  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:50.932016  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:50.988153  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:50.988188  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:51.004868  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:51.004912  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:51.078536  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:51.069572    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:51.070163    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:51.071963    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:51.072503    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:51.074005    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:51.078558  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:51.078571  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:51.105933  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:51.105979  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
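	The "sudo pgrep -xnf kube-apiserver.*minikube.*" line that opens each cycle is a process-level check that runs before the container listing. The log does not print its result, but the empty container lists and refused connections that follow are consistent with no apiserver process surviving. A hand-run equivalent of that gate, under the same assumptions as the sketches above:
	
	# -f matches the full command line, -x requires the regex to match it
	# exactly, and -n picks the newest match; flags as used in the log line.
	if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	  echo "kube-apiserver process found"
	else
	  echo "kube-apiserver process absent; expect empty crictl output"
	fi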
	I1213 11:54:53.638010  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:53.648726  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:53.648799  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:53.692658  604010 cri.go:89] found id: ""
	I1213 11:54:53.692685  604010 logs.go:282] 0 containers: []
	W1213 11:54:53.692693  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:53.692700  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:53.692760  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:53.728295  604010 cri.go:89] found id: ""
	I1213 11:54:53.728326  604010 logs.go:282] 0 containers: []
	W1213 11:54:53.728335  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:53.728343  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:53.728402  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:53.768548  604010 cri.go:89] found id: ""
	I1213 11:54:53.768576  604010 logs.go:282] 0 containers: []
	W1213 11:54:53.768585  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:53.768591  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:53.768649  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:53.808130  604010 cri.go:89] found id: ""
	I1213 11:54:53.808152  604010 logs.go:282] 0 containers: []
	W1213 11:54:53.808161  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:53.808167  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:53.808231  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:53.832811  604010 cri.go:89] found id: ""
	I1213 11:54:53.832839  604010 logs.go:282] 0 containers: []
	W1213 11:54:53.832849  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:53.832856  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:53.832916  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:53.857746  604010 cri.go:89] found id: ""
	I1213 11:54:53.857770  604010 logs.go:282] 0 containers: []
	W1213 11:54:53.857778  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:53.857785  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:53.857844  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:53.881722  604010 cri.go:89] found id: ""
	I1213 11:54:53.881747  604010 logs.go:282] 0 containers: []
	W1213 11:54:53.881756  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:53.881763  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:53.881830  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:53.907820  604010 cri.go:89] found id: ""
	I1213 11:54:53.907844  604010 logs.go:282] 0 containers: []
	W1213 11:54:53.907854  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:53.907864  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:53.907877  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:53.963717  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:53.963753  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:53.979615  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:53.979645  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:54.065903  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:54.056577    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:54.057248    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:54.058603    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:54.059235    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:54.061166    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:54.065924  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:54.065938  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:54.091653  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:54.091689  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:56.621960  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:56.633738  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:56.633810  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:56.692820  604010 cri.go:89] found id: ""
	I1213 11:54:56.692846  604010 logs.go:282] 0 containers: []
	W1213 11:54:56.692856  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:56.692863  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:56.692924  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:56.758799  604010 cri.go:89] found id: ""
	I1213 11:54:56.758842  604010 logs.go:282] 0 containers: []
	W1213 11:54:56.758870  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:56.758884  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:56.758978  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:56.784490  604010 cri.go:89] found id: ""
	I1213 11:54:56.784516  604010 logs.go:282] 0 containers: []
	W1213 11:54:56.784525  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:56.784532  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:56.784593  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:56.808898  604010 cri.go:89] found id: ""
	I1213 11:54:56.808919  604010 logs.go:282] 0 containers: []
	W1213 11:54:56.808928  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:56.808940  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:56.808998  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:56.833308  604010 cri.go:89] found id: ""
	I1213 11:54:56.833373  604010 logs.go:282] 0 containers: []
	W1213 11:54:56.833398  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:56.833416  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:56.833489  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:56.862468  604010 cri.go:89] found id: ""
	I1213 11:54:56.862543  604010 logs.go:282] 0 containers: []
	W1213 11:54:56.862568  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:56.862588  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:56.862678  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:56.891924  604010 cri.go:89] found id: ""
	I1213 11:54:56.891952  604010 logs.go:282] 0 containers: []
	W1213 11:54:56.891962  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:56.891969  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:56.892033  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:56.916269  604010 cri.go:89] found id: ""
	I1213 11:54:56.916296  604010 logs.go:282] 0 containers: []
	W1213 11:54:56.916306  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:56.916315  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:56.916327  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:56.980544  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:56.971761    6500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:56.972786    6500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:56.974371    6500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:56.974958    6500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:56.976490    6500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:56.971761    6500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:56.972786    6500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:56.974371    6500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:56.974958    6500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:56.976490    6500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:54:56.980565  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:56.980579  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:57.005423  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:57.005460  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:57.032993  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:57.033071  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:57.088966  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:57.089003  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
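	[editor's note] The dmesg step just above keeps only kernel messages at warning severity and up, with human-readable timestamps (-H), no pager (-P), no color escapes (-L=never), trimmed to the newest 400 lines. The same filter as a standalone command, plus a rough per-severity count; the counting loop is an addition for illustration, not part of the harness:

	    # Exactly the harness's filter: warn and above, newest 400 lines.
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    # Rough severity histogram (illustrative only):
	    for lvl in warn err crit alert emerg; do
	        printf '%-6s %s\n' "$lvl" "$(sudo dmesg -P --level "$lvl" | wc -l)"
	    done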
	I1213 11:54:59.606260  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:59.617007  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:59.617079  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:59.644389  604010 cri.go:89] found id: ""
	I1213 11:54:59.644411  604010 logs.go:282] 0 containers: []
	W1213 11:54:59.644420  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:59.644427  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:59.644484  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:59.689247  604010 cri.go:89] found id: ""
	I1213 11:54:59.689273  604010 logs.go:282] 0 containers: []
	W1213 11:54:59.689282  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:59.689289  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:59.689348  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:59.729540  604010 cri.go:89] found id: ""
	I1213 11:54:59.729582  604010 logs.go:282] 0 containers: []
	W1213 11:54:59.729591  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:59.729597  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:59.729658  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:59.759256  604010 cri.go:89] found id: ""
	I1213 11:54:59.759286  604010 logs.go:282] 0 containers: []
	W1213 11:54:59.759295  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:59.759301  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:59.759362  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:59.788748  604010 cri.go:89] found id: ""
	I1213 11:54:59.788772  604010 logs.go:282] 0 containers: []
	W1213 11:54:59.788780  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:59.788787  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:59.788846  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:59.817278  604010 cri.go:89] found id: ""
	I1213 11:54:59.817313  604010 logs.go:282] 0 containers: []
	W1213 11:54:59.817322  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:59.817328  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:59.817389  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:59.842756  604010 cri.go:89] found id: ""
	I1213 11:54:59.842780  604010 logs.go:282] 0 containers: []
	W1213 11:54:59.842788  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:59.842794  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:59.842862  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:59.868412  604010 cri.go:89] found id: ""
	I1213 11:54:59.868435  604010 logs.go:282] 0 containers: []
	W1213 11:54:59.868443  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:59.868453  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:59.868464  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:59.924773  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:59.924808  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:59.940672  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:59.940704  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:00.041026  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:00.001683    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:00.002326    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:00.007036    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:00.009108    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:00.010359    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:00.001683    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:00.002326    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:00.007036    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:00.009108    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:00.010359    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:55:00.045695  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:00.045733  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:00.200188  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:00.200291  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
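	[editor's note] The "container status" command above is a two-stage fallback: `which crictl || echo crictl` substitutes the crictl path when it exists (and the bare word crictl, which then fails, when it does not), and the trailing `|| sudo docker ps -a` catches the failure on Docker-runtime nodes. The original also falls back when crictl itself errors; this sketch keeps only the common path, unrolled into plain shell:

	    # Prefer the CRI client when present; otherwise fall back to docker.
	    if command -v crictl >/dev/null 2>&1; then
	        sudo crictl ps -a          # all containers, running or exited
	    else
	        sudo docker ps -a          # Docker-runtime fallback
	    fi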
	I1213 11:55:02.798329  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:02.808984  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:02.809067  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:02.836650  604010 cri.go:89] found id: ""
	I1213 11:55:02.836675  604010 logs.go:282] 0 containers: []
	W1213 11:55:02.836684  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:02.836692  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:02.836755  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:02.861812  604010 cri.go:89] found id: ""
	I1213 11:55:02.861837  604010 logs.go:282] 0 containers: []
	W1213 11:55:02.861846  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:02.861853  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:02.861915  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:02.892956  604010 cri.go:89] found id: ""
	I1213 11:55:02.892982  604010 logs.go:282] 0 containers: []
	W1213 11:55:02.892992  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:02.892999  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:02.893061  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:02.921418  604010 cri.go:89] found id: ""
	I1213 11:55:02.921444  604010 logs.go:282] 0 containers: []
	W1213 11:55:02.921454  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:02.921460  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:02.921517  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:02.945971  604010 cri.go:89] found id: ""
	I1213 11:55:02.945998  604010 logs.go:282] 0 containers: []
	W1213 11:55:02.946007  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:02.946013  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:02.946071  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:02.971224  604010 cri.go:89] found id: ""
	I1213 11:55:02.971249  604010 logs.go:282] 0 containers: []
	W1213 11:55:02.971258  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:02.971264  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:02.971322  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:02.996070  604010 cri.go:89] found id: ""
	I1213 11:55:02.996098  604010 logs.go:282] 0 containers: []
	W1213 11:55:02.996107  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:02.996113  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:02.996175  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:03.026595  604010 cri.go:89] found id: ""
	I1213 11:55:03.026628  604010 logs.go:282] 0 containers: []
	W1213 11:55:03.026637  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:03.026647  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:03.026662  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:03.083030  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:03.083068  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:03.099216  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:03.099247  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:03.164245  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:03.155657    6728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:03.156486    6728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:03.158171    6728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:03.158870    6728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:03.160386    6728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:03.155657    6728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:03.156486    6728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:03.158171    6728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:03.158870    6728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:03.160386    6728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:55:03.164269  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:03.164287  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:03.190063  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:03.190105  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
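	[editor's note] Each cycle opens with the probe that immediately follows this line: pgrep -xnf, i.e. match against the full command line (-f), require an exact whole-line match (-x), and report only the newest hit (-n). As a self-contained check, with the pattern copied verbatim from the log:

	    if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	        echo "apiserver process is up"
	    else
	        echo "no apiserver process"    # the branch this log keeps hitting
	    fi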
	I1213 11:55:05.717488  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:05.729517  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:05.729651  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:05.754839  604010 cri.go:89] found id: ""
	I1213 11:55:05.754862  604010 logs.go:282] 0 containers: []
	W1213 11:55:05.754870  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:05.754877  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:05.754935  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:05.779444  604010 cri.go:89] found id: ""
	I1213 11:55:05.779470  604010 logs.go:282] 0 containers: []
	W1213 11:55:05.779478  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:05.779486  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:05.779546  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:05.804435  604010 cri.go:89] found id: ""
	I1213 11:55:05.804460  604010 logs.go:282] 0 containers: []
	W1213 11:55:05.804468  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:05.804475  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:05.804536  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:05.828365  604010 cri.go:89] found id: ""
	I1213 11:55:05.828431  604010 logs.go:282] 0 containers: []
	W1213 11:55:05.828454  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:05.828473  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:05.828538  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:05.853088  604010 cri.go:89] found id: ""
	I1213 11:55:05.853114  604010 logs.go:282] 0 containers: []
	W1213 11:55:05.853123  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:05.853129  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:05.853187  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:05.881265  604010 cri.go:89] found id: ""
	I1213 11:55:05.881288  604010 logs.go:282] 0 containers: []
	W1213 11:55:05.881297  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:05.881303  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:05.881363  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:05.907771  604010 cri.go:89] found id: ""
	I1213 11:55:05.907795  604010 logs.go:282] 0 containers: []
	W1213 11:55:05.907804  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:05.907811  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:05.907881  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:05.932155  604010 cri.go:89] found id: ""
	I1213 11:55:05.932181  604010 logs.go:282] 0 containers: []
	W1213 11:55:05.932189  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:05.932199  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:05.932211  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:05.960440  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:05.960467  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:06.018319  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:06.018357  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:06.034573  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:06.034602  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:06.099936  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:06.091153    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:06.091939    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:06.093705    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:06.094323    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:06.095974    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:06.091153    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:06.091939    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:06.093705    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:06.094323    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:06.095974    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:55:06.099962  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:06.099975  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
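	[editor's note] Unit logs are gathered via journalctl, capped at the newest 400 lines per unit. A sketch that captures the same two units to files for side-by-side reading; the output paths and --no-pager are additions, the journalctl invocations match the log:

	    for unit in kubelet containerd; do
	        sudo journalctl -u "$unit" -n 400 --no-pager > "/tmp/${unit}.log"
	    done
	    wc -l /tmp/kubelet.log /tmp/containerd.log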
	I1213 11:55:08.626581  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:08.637490  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:08.637574  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:08.674556  604010 cri.go:89] found id: ""
	I1213 11:55:08.674581  604010 logs.go:282] 0 containers: []
	W1213 11:55:08.674589  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:08.674598  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:08.674659  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:08.719063  604010 cri.go:89] found id: ""
	I1213 11:55:08.719087  604010 logs.go:282] 0 containers: []
	W1213 11:55:08.719095  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:08.719101  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:08.719166  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:08.761839  604010 cri.go:89] found id: ""
	I1213 11:55:08.761863  604010 logs.go:282] 0 containers: []
	W1213 11:55:08.761872  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:08.761878  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:08.761939  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:08.793242  604010 cri.go:89] found id: ""
	I1213 11:55:08.793266  604010 logs.go:282] 0 containers: []
	W1213 11:55:08.793274  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:08.793281  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:08.793338  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:08.823380  604010 cri.go:89] found id: ""
	I1213 11:55:08.823406  604010 logs.go:282] 0 containers: []
	W1213 11:55:08.823416  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:08.823424  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:08.823488  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:08.849669  604010 cri.go:89] found id: ""
	I1213 11:55:08.849696  604010 logs.go:282] 0 containers: []
	W1213 11:55:08.849705  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:08.849712  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:08.849773  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:08.876618  604010 cri.go:89] found id: ""
	I1213 11:55:08.876684  604010 logs.go:282] 0 containers: []
	W1213 11:55:08.876707  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:08.876726  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:08.876807  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:08.902762  604010 cri.go:89] found id: ""
	I1213 11:55:08.902802  604010 logs.go:282] 0 containers: []
	W1213 11:55:08.902811  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:08.902820  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:08.902833  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:08.918880  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:08.918910  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:08.990155  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:08.981658    6952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:08.982141    6952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:08.984095    6952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:08.984454    6952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:08.986001    6952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:08.981658    6952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:08.982141    6952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:08.984095    6952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:08.984454    6952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:08.986001    6952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:55:08.990182  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:08.990196  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:09.017239  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:09.017278  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:09.049754  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:09.049785  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
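	[editor's note] The eight crictl queries per cycle differ only in the --name filter, and every one returns an empty ID list, which is what produces the paired `found id: ""` and `No container was found` lines. Compressed into one loop over the same component names:

	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	        ids=$(sudo crictl ps -a --quiet --name="$name")
	        echo "${name}: ${ids:-<none>}"    # <none> reproduces the empty result above
	    done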
	I1213 11:55:11.607272  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:11.617804  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:11.617876  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:11.646336  604010 cri.go:89] found id: ""
	I1213 11:55:11.646359  604010 logs.go:282] 0 containers: []
	W1213 11:55:11.646368  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:11.646374  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:11.646434  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:11.684464  604010 cri.go:89] found id: ""
	I1213 11:55:11.684490  604010 logs.go:282] 0 containers: []
	W1213 11:55:11.684499  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:11.684505  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:11.684566  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:11.724793  604010 cri.go:89] found id: ""
	I1213 11:55:11.724816  604010 logs.go:282] 0 containers: []
	W1213 11:55:11.724824  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:11.724831  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:11.724890  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:11.760776  604010 cri.go:89] found id: ""
	I1213 11:55:11.760799  604010 logs.go:282] 0 containers: []
	W1213 11:55:11.760807  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:11.760814  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:11.760873  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:11.787122  604010 cri.go:89] found id: ""
	I1213 11:55:11.787195  604010 logs.go:282] 0 containers: []
	W1213 11:55:11.787217  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:11.787237  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:11.787333  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:11.812257  604010 cri.go:89] found id: ""
	I1213 11:55:11.812283  604010 logs.go:282] 0 containers: []
	W1213 11:55:11.812291  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:11.812298  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:11.812359  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:11.837304  604010 cri.go:89] found id: ""
	I1213 11:55:11.837341  604010 logs.go:282] 0 containers: []
	W1213 11:55:11.837350  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:11.837356  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:11.837427  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:11.861726  604010 cri.go:89] found id: ""
	I1213 11:55:11.861759  604010 logs.go:282] 0 containers: []
	W1213 11:55:11.861768  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:11.861778  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:11.861792  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:11.918248  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:11.918285  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:11.934535  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:11.934571  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:12.005308  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:11.993379    7063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:11.994149    7063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:11.995831    7063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:11.996328    7063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:11.998145    7063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:11.993379    7063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:11.994149    7063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:11.995831    7063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:11.996328    7063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:11.998145    7063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:55:12.005338  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:12.005351  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:12.031381  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:12.031415  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
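	[editor's note] Before blaming the network for a refused connection, it is worth confirming which endpoint the kubeconfig actually targets; `kubectl config view` needs no running apiserver. A sketch using the same in-node binary and kubeconfig path that appear throughout this log, assuming the file sets a current-context (which --minify requires):

	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	        --kubeconfig=/var/lib/minikube/kubeconfig \
	        config view --minify -o jsonpath='{.clusters[0].cluster.server}'; echo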
	I1213 11:55:14.558358  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:14.569230  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:14.569297  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:14.594108  604010 cri.go:89] found id: ""
	I1213 11:55:14.594186  604010 logs.go:282] 0 containers: []
	W1213 11:55:14.594209  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:14.594231  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:14.594306  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:14.617763  604010 cri.go:89] found id: ""
	I1213 11:55:14.617784  604010 logs.go:282] 0 containers: []
	W1213 11:55:14.617818  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:14.617824  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:14.617882  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:14.641477  604010 cri.go:89] found id: ""
	I1213 11:55:14.641499  604010 logs.go:282] 0 containers: []
	W1213 11:55:14.641508  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:14.641514  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:14.641580  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:14.706320  604010 cri.go:89] found id: ""
	I1213 11:55:14.706395  604010 logs.go:282] 0 containers: []
	W1213 11:55:14.706419  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:14.706438  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:14.706530  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:14.750579  604010 cri.go:89] found id: ""
	I1213 11:55:14.750602  604010 logs.go:282] 0 containers: []
	W1213 11:55:14.750611  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:14.750617  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:14.750738  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:14.777264  604010 cri.go:89] found id: ""
	I1213 11:55:14.777299  604010 logs.go:282] 0 containers: []
	W1213 11:55:14.777308  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:14.777321  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:14.777392  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:14.801675  604010 cri.go:89] found id: ""
	I1213 11:55:14.801750  604010 logs.go:282] 0 containers: []
	W1213 11:55:14.801775  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:14.801794  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:14.801878  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:14.826273  604010 cri.go:89] found id: ""
	I1213 11:55:14.826308  604010 logs.go:282] 0 containers: []
	W1213 11:55:14.826317  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:14.826327  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:14.826341  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:14.852456  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:14.852492  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:14.880309  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:14.880337  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:14.935692  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:14.935727  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:14.952137  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:14.952167  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:15.033989  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:15.011900    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:15.014560    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:15.015092    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:15.017168    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:15.018209    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:15.011900    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:15.014560    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:15.015092    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:15.017168    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:15.018209    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
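	[editor's note] Seen end to end, this section is a single polling loop: probe for an apiserver process roughly every three seconds and re-gather the same five log sources on every miss. The shape of that loop, with an assumed, illustrative deadline — minikube's real wait logic lives in Go, and nothing below is its actual code:

	    deadline=$((SECONDS + 300))        # 300s is an assumed timeout, not from the log
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	        (( SECONDS >= deadline )) && { echo "apiserver never came up" >&2; exit 1; }
	        sleep 3                        # matches the ~3s cadence of the timestamps above
	    done
	    echo "apiserver is running"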
	I1213 11:55:17.535599  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:17.547401  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:17.547477  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:17.573160  604010 cri.go:89] found id: ""
	I1213 11:55:17.573190  604010 logs.go:282] 0 containers: []
	W1213 11:55:17.573199  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:17.573206  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:17.573269  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:17.602638  604010 cri.go:89] found id: ""
	I1213 11:55:17.602664  604010 logs.go:282] 0 containers: []
	W1213 11:55:17.602673  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:17.602679  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:17.602761  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:17.628217  604010 cri.go:89] found id: ""
	I1213 11:55:17.628242  604010 logs.go:282] 0 containers: []
	W1213 11:55:17.628251  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:17.628258  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:17.628321  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:17.653857  604010 cri.go:89] found id: ""
	I1213 11:55:17.653923  604010 logs.go:282] 0 containers: []
	W1213 11:55:17.653934  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:17.653941  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:17.654004  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:17.730131  604010 cri.go:89] found id: ""
	I1213 11:55:17.730166  604010 logs.go:282] 0 containers: []
	W1213 11:55:17.730175  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:17.730211  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:17.730290  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:17.764018  604010 cri.go:89] found id: ""
	I1213 11:55:17.764045  604010 logs.go:282] 0 containers: []
	W1213 11:55:17.764053  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:17.764060  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:17.764139  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:17.789006  604010 cri.go:89] found id: ""
	I1213 11:55:17.789029  604010 logs.go:282] 0 containers: []
	W1213 11:55:17.789039  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:17.789045  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:17.789110  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:17.820038  604010 cri.go:89] found id: ""
	I1213 11:55:17.820061  604010 logs.go:282] 0 containers: []
	W1213 11:55:17.820070  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:17.820080  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:17.820091  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:17.845672  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:17.845708  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:17.876520  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:17.876549  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:17.934113  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:17.934148  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:17.950852  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:17.950884  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:18.024225  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:18.014810    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:18.015320    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:18.017184    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:18.017872    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:18.019543    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
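The describe call fails before ever reaching the API: nothing is listening on localhost:8443. Assuming shell access to the node (for example via minikube ssh with the test's profile), two quick probes distinguish "no apiserver process" from "wrong kubeconfig"; the pgrep command is taken from the log itself, while the curl probe is an assumption (any HTTP client against the health endpoint works):

    # Sketch: check for an apiserver process and for a listener on 8443.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'   # no output: no apiserver process at all
    curl -k https://localhost:8443/healthz         # "connection refused" matches the errors above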
	[The same diagnostic cycle repeats roughly every three seconds, at 11:55:20, 11:55:23, 11:55:26, 11:55:29, 11:55:32, 11:55:35, 11:55:38, and 11:55:41. Apart from timestamps and kubectl PIDs the passes are identical: each one re-runs "sudo pgrep -xnf kube-apiserver.*minikube.*", queries crictl for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, and kubernetes-dashboard (every query returns no containers), gathers the kubelet, dmesg, containerd, and container-status logs, and fails "describe nodes" with the same "connection refused" error against localhost:8443 (exit status 1).]
	stdout:
	
	stderr:
	E1213 11:55:41.646598    8171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:41.647136    8171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:41.648610    8171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:41.649143    8171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:41.650745    8171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:41.646598    8171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:41.647136    8171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:41.648610    8171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:41.649143    8171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:41.650745    8171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:55:41.654481  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:41.654494  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:41.679884  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:41.679918  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:44.238824  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:44.249658  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:44.249735  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:44.274262  604010 cri.go:89] found id: ""
	I1213 11:55:44.274287  604010 logs.go:282] 0 containers: []
	W1213 11:55:44.274297  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:44.274303  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:44.274365  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:44.298725  604010 cri.go:89] found id: ""
	I1213 11:55:44.298750  604010 logs.go:282] 0 containers: []
	W1213 11:55:44.298759  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:44.298765  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:44.298831  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:44.332989  604010 cri.go:89] found id: ""
	I1213 11:55:44.333019  604010 logs.go:282] 0 containers: []
	W1213 11:55:44.333028  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:44.333035  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:44.333095  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:44.358205  604010 cri.go:89] found id: ""
	I1213 11:55:44.358229  604010 logs.go:282] 0 containers: []
	W1213 11:55:44.358238  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:44.358250  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:44.358313  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:44.383989  604010 cri.go:89] found id: ""
	I1213 11:55:44.384017  604010 logs.go:282] 0 containers: []
	W1213 11:55:44.384027  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:44.384034  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:44.384099  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:44.409651  604010 cri.go:89] found id: ""
	I1213 11:55:44.409677  604010 logs.go:282] 0 containers: []
	W1213 11:55:44.409686  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:44.409692  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:44.409751  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:44.435253  604010 cri.go:89] found id: ""
	I1213 11:55:44.435280  604010 logs.go:282] 0 containers: []
	W1213 11:55:44.435288  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:44.435295  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:44.435354  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:44.459342  604010 cri.go:89] found id: ""
	I1213 11:55:44.459379  604010 logs.go:282] 0 containers: []
	W1213 11:55:44.459388  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:44.459398  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:44.459409  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:44.527760  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:44.518804    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:44.519537    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:44.521331    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:44.521838    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:44.523375    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:44.518804    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:44.519537    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:44.521331    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:44.521838    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:44.523375    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:55:44.527781  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:44.527793  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:44.554052  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:44.554086  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:44.583553  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:44.583582  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:44.639690  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:44.639723  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:47.156860  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:47.167658  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:47.167728  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:47.191689  604010 cri.go:89] found id: ""
	I1213 11:55:47.191714  604010 logs.go:282] 0 containers: []
	W1213 11:55:47.191723  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:47.191730  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:47.191790  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:47.217625  604010 cri.go:89] found id: ""
	I1213 11:55:47.217652  604010 logs.go:282] 0 containers: []
	W1213 11:55:47.217665  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:47.217679  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:47.217756  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:47.246057  604010 cri.go:89] found id: ""
	I1213 11:55:47.246080  604010 logs.go:282] 0 containers: []
	W1213 11:55:47.246088  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:47.246094  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:47.246153  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:47.272649  604010 cri.go:89] found id: ""
	I1213 11:55:47.272673  604010 logs.go:282] 0 containers: []
	W1213 11:55:47.272682  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:47.272688  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:47.272747  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:47.297156  604010 cri.go:89] found id: ""
	I1213 11:55:47.297178  604010 logs.go:282] 0 containers: []
	W1213 11:55:47.297186  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:47.297192  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:47.297249  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:47.321533  604010 cri.go:89] found id: ""
	I1213 11:55:47.321555  604010 logs.go:282] 0 containers: []
	W1213 11:55:47.321563  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:47.321570  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:47.321647  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:47.347526  604010 cri.go:89] found id: ""
	I1213 11:55:47.347548  604010 logs.go:282] 0 containers: []
	W1213 11:55:47.347558  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:47.347566  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:47.347743  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:47.373360  604010 cri.go:89] found id: ""
	I1213 11:55:47.373437  604010 logs.go:282] 0 containers: []
	W1213 11:55:47.373466  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:47.373491  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:47.373544  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:47.406388  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:47.406463  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:47.467132  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:47.467169  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:47.482951  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:47.482977  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:47.547530  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:47.538747    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:47.539246    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:47.540864    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:47.541466    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:47.543147    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:47.538747    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:47.539246    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:47.540864    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:47.541466    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:47.543147    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:55:47.547599  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:47.547625  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:50.076734  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:50.088146  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:50.088221  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:50.114846  604010 cri.go:89] found id: ""
	I1213 11:55:50.114871  604010 logs.go:282] 0 containers: []
	W1213 11:55:50.114879  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:50.114885  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:50.114952  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:50.140346  604010 cri.go:89] found id: ""
	I1213 11:55:50.140383  604010 logs.go:282] 0 containers: []
	W1213 11:55:50.140393  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:50.140400  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:50.140461  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:50.165612  604010 cri.go:89] found id: ""
	I1213 11:55:50.165647  604010 logs.go:282] 0 containers: []
	W1213 11:55:50.165656  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:50.165663  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:50.165735  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:50.193167  604010 cri.go:89] found id: ""
	I1213 11:55:50.193196  604010 logs.go:282] 0 containers: []
	W1213 11:55:50.193205  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:50.193211  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:50.193288  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:50.217552  604010 cri.go:89] found id: ""
	I1213 11:55:50.217602  604010 logs.go:282] 0 containers: []
	W1213 11:55:50.217622  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:50.217630  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:50.217703  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:50.243207  604010 cri.go:89] found id: ""
	I1213 11:55:50.243230  604010 logs.go:282] 0 containers: []
	W1213 11:55:50.243240  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:50.243246  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:50.243306  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:50.267889  604010 cri.go:89] found id: ""
	I1213 11:55:50.267961  604010 logs.go:282] 0 containers: []
	W1213 11:55:50.267980  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:50.267988  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:50.268050  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:50.293393  604010 cri.go:89] found id: ""
	I1213 11:55:50.293420  604010 logs.go:282] 0 containers: []
	W1213 11:55:50.293429  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:50.293448  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:50.293461  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:50.358945  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:50.350414    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:50.351257    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:50.352886    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:50.353223    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:50.354777    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:50.350414    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:50.351257    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:50.352886    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:50.353223    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:50.354777    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:55:50.358967  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:50.358982  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:50.384886  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:50.384922  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:50.416671  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:50.416697  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:50.472398  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:50.472437  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:52.988724  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:53.000673  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:53.000825  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:53.028787  604010 cri.go:89] found id: ""
	I1213 11:55:53.028812  604010 logs.go:282] 0 containers: []
	W1213 11:55:53.028822  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:53.028829  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:53.028960  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:53.059024  604010 cri.go:89] found id: ""
	I1213 11:55:53.059060  604010 logs.go:282] 0 containers: []
	W1213 11:55:53.059069  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:53.059076  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:53.059137  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:53.084415  604010 cri.go:89] found id: ""
	I1213 11:55:53.084443  604010 logs.go:282] 0 containers: []
	W1213 11:55:53.084452  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:53.084459  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:53.084519  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:53.111367  604010 cri.go:89] found id: ""
	I1213 11:55:53.111402  604010 logs.go:282] 0 containers: []
	W1213 11:55:53.111413  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:53.111420  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:53.111485  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:53.138948  604010 cri.go:89] found id: ""
	I1213 11:55:53.138973  604010 logs.go:282] 0 containers: []
	W1213 11:55:53.138992  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:53.138999  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:53.139058  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:53.164317  604010 cri.go:89] found id: ""
	I1213 11:55:53.164341  604010 logs.go:282] 0 containers: []
	W1213 11:55:53.164350  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:53.164363  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:53.164420  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:53.189237  604010 cri.go:89] found id: ""
	I1213 11:55:53.189263  604010 logs.go:282] 0 containers: []
	W1213 11:55:53.189284  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:53.189291  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:53.189365  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:53.213792  604010 cri.go:89] found id: ""
	I1213 11:55:53.213831  604010 logs.go:282] 0 containers: []
	W1213 11:55:53.213840  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:53.213849  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:53.213864  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:53.268812  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:53.268852  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:53.284561  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:53.284592  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:53.350505  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:53.342240    8626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:53.342928    8626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:53.344529    8626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:53.345039    8626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:53.346717    8626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:53.342240    8626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:53.342928    8626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:53.344529    8626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:53.345039    8626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:53.346717    8626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:55:53.350528  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:53.350540  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:53.375550  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:53.375586  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:55.903770  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:55.916528  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:55.916606  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:55.974216  604010 cri.go:89] found id: ""
	I1213 11:55:55.974238  604010 logs.go:282] 0 containers: []
	W1213 11:55:55.974246  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:55.974254  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:55.974316  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:56.009212  604010 cri.go:89] found id: ""
	I1213 11:55:56.009235  604010 logs.go:282] 0 containers: []
	W1213 11:55:56.009243  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:56.009250  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:56.009308  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:56.036696  604010 cri.go:89] found id: ""
	I1213 11:55:56.036722  604010 logs.go:282] 0 containers: []
	W1213 11:55:56.036731  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:56.036738  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:56.036821  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:56.062550  604010 cri.go:89] found id: ""
	I1213 11:55:56.062577  604010 logs.go:282] 0 containers: []
	W1213 11:55:56.062586  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:56.062592  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:56.062649  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:56.087384  604010 cri.go:89] found id: ""
	I1213 11:55:56.087410  604010 logs.go:282] 0 containers: []
	W1213 11:55:56.087419  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:56.087425  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:56.087506  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:56.113129  604010 cri.go:89] found id: ""
	I1213 11:55:56.113153  604010 logs.go:282] 0 containers: []
	W1213 11:55:56.113164  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:56.113171  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:56.113234  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:56.137999  604010 cri.go:89] found id: ""
	I1213 11:55:56.138021  604010 logs.go:282] 0 containers: []
	W1213 11:55:56.138030  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:56.138036  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:56.138094  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:56.164815  604010 cri.go:89] found id: ""
	I1213 11:55:56.164841  604010 logs.go:282] 0 containers: []
	W1213 11:55:56.164851  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:56.164861  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:56.164872  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:56.190007  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:56.190042  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:56.222068  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:56.222097  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:56.277067  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:56.277104  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:56.293465  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:56.293495  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:56.360755  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:56.351282    8753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:56.352626    8753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:56.353483    8753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:56.354403    8753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:56.356173    8753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:56.351282    8753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:56.352626    8753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:56.353483    8753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:56.354403    8753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:56.356173    8753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:55:58.861486  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:58.872284  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:58.872365  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:58.898051  604010 cri.go:89] found id: ""
	I1213 11:55:58.898077  604010 logs.go:282] 0 containers: []
	W1213 11:55:58.898086  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:58.898093  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:58.898152  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:58.937804  604010 cri.go:89] found id: ""
	I1213 11:55:58.937834  604010 logs.go:282] 0 containers: []
	W1213 11:55:58.937852  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:58.937865  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:58.937957  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:58.987256  604010 cri.go:89] found id: ""
	I1213 11:55:58.987290  604010 logs.go:282] 0 containers: []
	W1213 11:55:58.987301  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:58.987308  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:58.987378  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:59.018252  604010 cri.go:89] found id: ""
	I1213 11:55:59.018274  604010 logs.go:282] 0 containers: []
	W1213 11:55:59.018282  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:59.018289  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:59.018350  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:59.046993  604010 cri.go:89] found id: ""
	I1213 11:55:59.047018  604010 logs.go:282] 0 containers: []
	W1213 11:55:59.047027  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:59.047033  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:59.047089  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:59.072813  604010 cri.go:89] found id: ""
	I1213 11:55:59.072888  604010 logs.go:282] 0 containers: []
	W1213 11:55:59.072903  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:59.072913  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:59.072988  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:59.097766  604010 cri.go:89] found id: ""
	I1213 11:55:59.097792  604010 logs.go:282] 0 containers: []
	W1213 11:55:59.097801  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:59.097808  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:59.097868  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:59.125013  604010 cri.go:89] found id: ""
	I1213 11:55:59.125038  604010 logs.go:282] 0 containers: []
	W1213 11:55:59.125047  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:59.125056  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:59.125070  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:59.150130  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:59.150164  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:59.178033  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:59.178107  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:59.233761  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:59.233795  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:59.249736  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:59.249772  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:59.314577  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:59.305285    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:59.306134    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:59.307637    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:59.308126    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:59.310000    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:59.305285    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:59.306134    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:59.307637    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:59.308126    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:59.310000    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:56:01.814837  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:01.826268  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:01.826352  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:01.856935  604010 cri.go:89] found id: ""
	I1213 11:56:01.856960  604010 logs.go:282] 0 containers: []
	W1213 11:56:01.856969  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:01.856979  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:01.857039  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:01.884429  604010 cri.go:89] found id: ""
	I1213 11:56:01.884454  604010 logs.go:282] 0 containers: []
	W1213 11:56:01.884463  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:01.884470  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:01.884530  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:01.929790  604010 cri.go:89] found id: ""
	I1213 11:56:01.929812  604010 logs.go:282] 0 containers: []
	W1213 11:56:01.929821  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:01.929828  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:01.929890  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:01.997657  604010 cri.go:89] found id: ""
	I1213 11:56:01.997686  604010 logs.go:282] 0 containers: []
	W1213 11:56:01.997703  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:01.997713  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:01.997785  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:02.027667  604010 cri.go:89] found id: ""
	I1213 11:56:02.027692  604010 logs.go:282] 0 containers: []
	W1213 11:56:02.027701  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:02.027707  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:02.027770  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:02.052911  604010 cri.go:89] found id: ""
	I1213 11:56:02.052935  604010 logs.go:282] 0 containers: []
	W1213 11:56:02.052944  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:02.052950  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:02.053009  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:02.078744  604010 cri.go:89] found id: ""
	I1213 11:56:02.078813  604010 logs.go:282] 0 containers: []
	W1213 11:56:02.078839  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:02.078857  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:02.078946  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:02.104065  604010 cri.go:89] found id: ""
	I1213 11:56:02.104136  604010 logs.go:282] 0 containers: []
	W1213 11:56:02.104158  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:02.104181  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:02.104219  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:02.177602  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:02.166576    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:02.167162    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:02.170937    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:02.171543    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:02.173272    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:02.177623  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:02.177635  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:02.203025  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:02.203064  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:02.232249  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:02.232275  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:02.288746  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:02.288781  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:04.806667  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:04.817452  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:04.817526  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:04.843671  604010 cri.go:89] found id: ""
	I1213 11:56:04.843696  604010 logs.go:282] 0 containers: []
	W1213 11:56:04.843705  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:04.843712  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:04.843770  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:04.869847  604010 cri.go:89] found id: ""
	I1213 11:56:04.869873  604010 logs.go:282] 0 containers: []
	W1213 11:56:04.869882  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:04.869889  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:04.869949  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:04.895727  604010 cri.go:89] found id: ""
	I1213 11:56:04.895750  604010 logs.go:282] 0 containers: []
	W1213 11:56:04.895759  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:04.895766  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:04.895874  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:04.958057  604010 cri.go:89] found id: ""
	I1213 11:56:04.958083  604010 logs.go:282] 0 containers: []
	W1213 11:56:04.958093  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:04.958102  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:04.958164  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:05.011151  604010 cri.go:89] found id: ""
	I1213 11:56:05.011180  604010 logs.go:282] 0 containers: []
	W1213 11:56:05.011191  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:05.011198  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:05.011301  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:05.042226  604010 cri.go:89] found id: ""
	I1213 11:56:05.042257  604010 logs.go:282] 0 containers: []
	W1213 11:56:05.042267  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:05.042274  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:05.042344  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:05.067033  604010 cri.go:89] found id: ""
	I1213 11:56:05.067057  604010 logs.go:282] 0 containers: []
	W1213 11:56:05.067066  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:05.067073  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:05.067137  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:05.092704  604010 cri.go:89] found id: ""
	I1213 11:56:05.092729  604010 logs.go:282] 0 containers: []
	W1213 11:56:05.092740  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:05.092751  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:05.092789  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:05.149091  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:05.149142  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:05.165497  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:05.165536  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:05.234289  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:05.225131    9076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:05.225892    9076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:05.227653    9076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:05.228318    9076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:05.230170    9076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:05.234313  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:05.234326  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:05.259839  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:05.259877  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:07.795276  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:07.805797  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:07.805865  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:07.833431  604010 cri.go:89] found id: ""
	I1213 11:56:07.833458  604010 logs.go:282] 0 containers: []
	W1213 11:56:07.833467  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:07.833474  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:07.833533  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:07.859570  604010 cri.go:89] found id: ""
	I1213 11:56:07.859596  604010 logs.go:282] 0 containers: []
	W1213 11:56:07.859605  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:07.859612  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:07.859680  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:07.885597  604010 cri.go:89] found id: ""
	I1213 11:56:07.885621  604010 logs.go:282] 0 containers: []
	W1213 11:56:07.885630  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:07.885636  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:07.885693  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:07.932272  604010 cri.go:89] found id: ""
	I1213 11:56:07.932295  604010 logs.go:282] 0 containers: []
	W1213 11:56:07.932304  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:07.932311  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:07.932368  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:07.971123  604010 cri.go:89] found id: ""
	I1213 11:56:07.971146  604010 logs.go:282] 0 containers: []
	W1213 11:56:07.971156  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:07.971162  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:07.971223  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:08.020370  604010 cri.go:89] found id: ""
	I1213 11:56:08.020442  604010 logs.go:282] 0 containers: []
	W1213 11:56:08.020470  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:08.020488  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:08.020576  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:08.050772  604010 cri.go:89] found id: ""
	I1213 11:56:08.050843  604010 logs.go:282] 0 containers: []
	W1213 11:56:08.050870  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:08.050888  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:08.050977  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:08.076860  604010 cri.go:89] found id: ""
	I1213 11:56:08.076891  604010 logs.go:282] 0 containers: []
	W1213 11:56:08.076901  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:08.076911  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:08.076923  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:08.136737  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:08.136772  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:08.152700  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:08.152856  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:08.216955  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:08.208521    9190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:08.209263    9190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:08.210851    9190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:08.211330    9190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:08.212940    9190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:08.217027  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:08.217055  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:08.242524  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:08.242562  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:10.774825  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:10.785504  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:10.785573  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:10.812402  604010 cri.go:89] found id: ""
	I1213 11:56:10.812424  604010 logs.go:282] 0 containers: []
	W1213 11:56:10.812433  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:10.812440  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:10.812495  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:10.837362  604010 cri.go:89] found id: ""
	I1213 11:56:10.837387  604010 logs.go:282] 0 containers: []
	W1213 11:56:10.837396  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:10.837402  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:10.837461  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:10.862348  604010 cri.go:89] found id: ""
	I1213 11:56:10.862374  604010 logs.go:282] 0 containers: []
	W1213 11:56:10.862382  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:10.862389  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:10.862447  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:10.886922  604010 cri.go:89] found id: ""
	I1213 11:56:10.886999  604010 logs.go:282] 0 containers: []
	W1213 11:56:10.887020  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:10.887038  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:10.887121  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:10.931278  604010 cri.go:89] found id: ""
	I1213 11:56:10.931347  604010 logs.go:282] 0 containers: []
	W1213 11:56:10.931369  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:10.931387  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:10.931475  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:10.974160  604010 cri.go:89] found id: ""
	I1213 11:56:10.974226  604010 logs.go:282] 0 containers: []
	W1213 11:56:10.974254  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:10.974272  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:10.974357  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:11.010218  604010 cri.go:89] found id: ""
	I1213 11:56:11.010290  604010 logs.go:282] 0 containers: []
	W1213 11:56:11.010313  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:11.010332  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:11.010424  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:11.039062  604010 cri.go:89] found id: ""
	I1213 11:56:11.039097  604010 logs.go:282] 0 containers: []
	W1213 11:56:11.039108  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:11.039118  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:11.039130  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:11.095996  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:11.096035  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:11.112552  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:11.112583  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:11.181416  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:11.172048    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:11.172697    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:11.174491    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:11.175376    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:11.177169    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:11.181436  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:11.181451  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:11.206963  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:11.207000  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:13.739447  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:13.750286  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:13.750359  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:13.776350  604010 cri.go:89] found id: ""
	I1213 11:56:13.776379  604010 logs.go:282] 0 containers: []
	W1213 11:56:13.776388  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:13.776395  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:13.776460  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:13.800680  604010 cri.go:89] found id: ""
	I1213 11:56:13.800705  604010 logs.go:282] 0 containers: []
	W1213 11:56:13.800714  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:13.800721  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:13.800780  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:13.826000  604010 cri.go:89] found id: ""
	I1213 11:56:13.826038  604010 logs.go:282] 0 containers: []
	W1213 11:56:13.826050  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:13.826072  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:13.826155  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:13.850538  604010 cri.go:89] found id: ""
	I1213 11:56:13.850564  604010 logs.go:282] 0 containers: []
	W1213 11:56:13.850582  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:13.850611  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:13.850706  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:13.879462  604010 cri.go:89] found id: ""
	I1213 11:56:13.879488  604010 logs.go:282] 0 containers: []
	W1213 11:56:13.879496  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:13.879503  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:13.879559  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:13.904388  604010 cri.go:89] found id: ""
	I1213 11:56:13.904414  604010 logs.go:282] 0 containers: []
	W1213 11:56:13.904422  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:13.904432  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:13.904488  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:13.936193  604010 cri.go:89] found id: ""
	I1213 11:56:13.936221  604010 logs.go:282] 0 containers: []
	W1213 11:56:13.936229  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:13.936236  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:13.936304  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:13.979520  604010 cri.go:89] found id: ""
	I1213 11:56:13.979547  604010 logs.go:282] 0 containers: []
	W1213 11:56:13.979556  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:13.979566  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:13.979577  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:14.047872  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:14.047909  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:14.064531  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:14.064559  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:14.132145  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:14.123439    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:14.124184    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:14.125827    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:14.126337    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:14.128067    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:14.132167  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:14.132180  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:14.158143  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:14.158181  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:16.686213  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:16.696766  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:16.696836  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:16.720811  604010 cri.go:89] found id: ""
	I1213 11:56:16.720840  604010 logs.go:282] 0 containers: []
	W1213 11:56:16.720849  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:16.720856  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:16.720916  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:16.746135  604010 cri.go:89] found id: ""
	I1213 11:56:16.746162  604010 logs.go:282] 0 containers: []
	W1213 11:56:16.746170  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:16.746177  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:16.746235  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:16.772135  604010 cri.go:89] found id: ""
	I1213 11:56:16.772162  604010 logs.go:282] 0 containers: []
	W1213 11:56:16.772171  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:16.772177  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:16.772263  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:16.801712  604010 cri.go:89] found id: ""
	I1213 11:56:16.801738  604010 logs.go:282] 0 containers: []
	W1213 11:56:16.801748  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:16.801754  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:16.801813  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:16.825625  604010 cri.go:89] found id: ""
	I1213 11:56:16.825649  604010 logs.go:282] 0 containers: []
	W1213 11:56:16.825658  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:16.825664  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:16.825723  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:16.850464  604010 cri.go:89] found id: ""
	I1213 11:56:16.850490  604010 logs.go:282] 0 containers: []
	W1213 11:56:16.850498  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:16.850505  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:16.850561  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:16.882804  604010 cri.go:89] found id: ""
	I1213 11:56:16.882826  604010 logs.go:282] 0 containers: []
	W1213 11:56:16.882835  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:16.882848  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:16.882906  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:16.908046  604010 cri.go:89] found id: ""
	I1213 11:56:16.908071  604010 logs.go:282] 0 containers: []
	W1213 11:56:16.908080  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:16.908090  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:16.908104  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:17.008503  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:17.008590  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:17.024851  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:17.024884  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:17.092834  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:17.083994    9524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:17.084849    9524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:17.086559    9524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:17.087267    9524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:17.088871    9524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:17.092854  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:17.092867  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:17.118299  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:17.118334  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:19.647201  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:19.658196  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:19.658313  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:19.681845  604010 cri.go:89] found id: ""
	I1213 11:56:19.681924  604010 logs.go:282] 0 containers: []
	W1213 11:56:19.681947  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:19.681966  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:19.682053  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:19.707693  604010 cri.go:89] found id: ""
	I1213 11:56:19.707717  604010 logs.go:282] 0 containers: []
	W1213 11:56:19.707727  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:19.707733  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:19.707809  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:19.732762  604010 cri.go:89] found id: ""
	I1213 11:56:19.732788  604010 logs.go:282] 0 containers: []
	W1213 11:56:19.732797  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:19.732804  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:19.732884  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:19.757359  604010 cri.go:89] found id: ""
	I1213 11:56:19.757393  604010 logs.go:282] 0 containers: []
	W1213 11:56:19.757402  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:19.757423  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:19.757500  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:19.785446  604010 cri.go:89] found id: ""
	I1213 11:56:19.785473  604010 logs.go:282] 0 containers: []
	W1213 11:56:19.785482  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:19.785489  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:19.785610  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:19.812583  604010 cri.go:89] found id: ""
	I1213 11:56:19.812607  604010 logs.go:282] 0 containers: []
	W1213 11:56:19.812616  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:19.812623  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:19.812681  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:19.836875  604010 cri.go:89] found id: ""
	I1213 11:56:19.836901  604010 logs.go:282] 0 containers: []
	W1213 11:56:19.836910  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:19.836919  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:19.837022  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:19.861557  604010 cri.go:89] found id: ""
	I1213 11:56:19.861584  604010 logs.go:282] 0 containers: []
	W1213 11:56:19.861595  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:19.861610  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:19.861631  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:19.920472  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:19.920510  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:19.973429  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:19.973459  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:20.062908  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:20.053401    9639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:20.054064    9639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:20.055967    9639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:20.056677    9639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:20.058665    9639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:20.062932  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:20.062945  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:20.089847  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:20.089889  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:22.621952  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:22.633355  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:22.633434  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:22.661131  604010 cri.go:89] found id: ""
	I1213 11:56:22.661156  604010 logs.go:282] 0 containers: []
	W1213 11:56:22.661165  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:22.661172  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:22.661231  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:22.687274  604010 cri.go:89] found id: ""
	I1213 11:56:22.687309  604010 logs.go:282] 0 containers: []
	W1213 11:56:22.687319  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:22.687325  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:22.687385  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:22.712134  604010 cri.go:89] found id: ""
	I1213 11:56:22.712162  604010 logs.go:282] 0 containers: []
	W1213 11:56:22.712177  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:22.712184  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:22.712243  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:22.737658  604010 cri.go:89] found id: ""
	I1213 11:56:22.737684  604010 logs.go:282] 0 containers: []
	W1213 11:56:22.737693  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:22.737699  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:22.737756  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:22.762933  604010 cri.go:89] found id: ""
	I1213 11:56:22.762958  604010 logs.go:282] 0 containers: []
	W1213 11:56:22.762966  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:22.762973  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:22.763030  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:22.787428  604010 cri.go:89] found id: ""
	I1213 11:56:22.787453  604010 logs.go:282] 0 containers: []
	W1213 11:56:22.787463  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:22.787469  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:22.787531  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:22.812716  604010 cri.go:89] found id: ""
	I1213 11:56:22.812746  604010 logs.go:282] 0 containers: []
	W1213 11:56:22.812754  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:22.812761  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:22.812849  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:22.837817  604010 cri.go:89] found id: ""
	I1213 11:56:22.837844  604010 logs.go:282] 0 containers: []
	W1213 11:56:22.837853  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:22.837863  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:22.837883  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:22.893260  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:22.893294  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:22.917278  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:22.917388  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:23.026082  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:23.017267    9756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:23.017959    9756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:23.019734    9756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:23.020131    9756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:23.021757    9756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:23.026106  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:23.026120  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:23.052026  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:23.052065  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:25.580545  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:25.591333  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:25.591403  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:25.616731  604010 cri.go:89] found id: ""
	I1213 11:56:25.616754  604010 logs.go:282] 0 containers: []
	W1213 11:56:25.616764  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:25.616771  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:25.616827  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:25.646111  604010 cri.go:89] found id: ""
	I1213 11:56:25.646135  604010 logs.go:282] 0 containers: []
	W1213 11:56:25.646144  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:25.646151  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:25.646212  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:25.674261  604010 cri.go:89] found id: ""
	I1213 11:56:25.674284  604010 logs.go:282] 0 containers: []
	W1213 11:56:25.674293  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:25.674300  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:25.674358  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:25.700613  604010 cri.go:89] found id: ""
	I1213 11:56:25.700636  604010 logs.go:282] 0 containers: []
	W1213 11:56:25.700644  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:25.700650  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:25.700707  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:25.728704  604010 cri.go:89] found id: ""
	I1213 11:56:25.728789  604010 logs.go:282] 0 containers: []
	W1213 11:56:25.728805  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:25.728818  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:25.728885  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:25.761516  604010 cri.go:89] found id: ""
	I1213 11:56:25.761538  604010 logs.go:282] 0 containers: []
	W1213 11:56:25.761548  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:25.761555  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:25.761635  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:25.786867  604010 cri.go:89] found id: ""
	I1213 11:56:25.786895  604010 logs.go:282] 0 containers: []
	W1213 11:56:25.786905  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:25.786911  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:25.786970  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:25.811462  604010 cri.go:89] found id: ""
	I1213 11:56:25.811485  604010 logs.go:282] 0 containers: []
	W1213 11:56:25.811493  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:25.811503  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:25.811514  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:25.866924  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:25.866955  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:25.883500  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:25.883530  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:25.977779  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:25.966190    9862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:25.967164    9862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:25.969705    9862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:25.971514    9862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:25.972246    9862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
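	Every describe-nodes attempt fails identically: kubectl on the node cannot reach https://localhost:8443 at all ("connection refused" means nothing is listening, not that the apiserver is unhealthy). A hypothetical manual probe of that endpoint (curl is not part of the test flow; -k skips verification of the apiserver's self-signed certificate):

	  sudo curl -sk https://localhost:8443/healthz \
	    || echo "nothing listening on 8443 - apiserver never came up"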
	I1213 11:56:25.977806  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:25.977819  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:26.009949  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:26.010030  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:28.542187  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:28.552481  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:28.552607  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:28.581578  604010 cri.go:89] found id: ""
	I1213 11:56:28.581611  604010 logs.go:282] 0 containers: []
	W1213 11:56:28.581627  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:28.581634  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:28.581690  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:28.607125  604010 cri.go:89] found id: ""
	I1213 11:56:28.607149  604010 logs.go:282] 0 containers: []
	W1213 11:56:28.607157  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:28.607163  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:28.607220  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:28.632720  604010 cri.go:89] found id: ""
	I1213 11:56:28.632747  604010 logs.go:282] 0 containers: []
	W1213 11:56:28.632758  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:28.632765  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:28.632822  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:28.658222  604010 cri.go:89] found id: ""
	I1213 11:56:28.658251  604010 logs.go:282] 0 containers: []
	W1213 11:56:28.658260  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:28.658267  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:28.658325  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:28.682387  604010 cri.go:89] found id: ""
	I1213 11:56:28.682425  604010 logs.go:282] 0 containers: []
	W1213 11:56:28.682436  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:28.682443  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:28.682519  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:28.707965  604010 cri.go:89] found id: ""
	I1213 11:56:28.708001  604010 logs.go:282] 0 containers: []
	W1213 11:56:28.708011  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:28.708024  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:28.708094  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:28.737087  604010 cri.go:89] found id: ""
	I1213 11:56:28.737115  604010 logs.go:282] 0 containers: []
	W1213 11:56:28.737124  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:28.737130  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:28.737189  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:28.761982  604010 cri.go:89] found id: ""
	I1213 11:56:28.762059  604010 logs.go:282] 0 containers: []
	W1213 11:56:28.762081  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:28.762108  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:28.762148  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:28.817649  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:28.817687  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:28.833874  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:28.833904  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:28.901287  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:28.892846    9975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:28.893499    9975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:28.895107    9975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:28.895608    9975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:28.897226    9975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:28.901308  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:28.901319  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:28.943036  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:28.943114  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:31.504085  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:31.516702  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:31.516776  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:31.541829  604010 cri.go:89] found id: ""
	I1213 11:56:31.541852  604010 logs.go:282] 0 containers: []
	W1213 11:56:31.541861  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:31.541868  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:31.541927  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:31.567128  604010 cri.go:89] found id: ""
	I1213 11:56:31.567153  604010 logs.go:282] 0 containers: []
	W1213 11:56:31.567162  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:31.567169  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:31.567228  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:31.592889  604010 cri.go:89] found id: ""
	I1213 11:56:31.592914  604010 logs.go:282] 0 containers: []
	W1213 11:56:31.592924  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:31.592931  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:31.592988  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:31.620810  604010 cri.go:89] found id: ""
	I1213 11:56:31.620834  604010 logs.go:282] 0 containers: []
	W1213 11:56:31.620843  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:31.620850  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:31.620907  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:31.645931  604010 cri.go:89] found id: ""
	I1213 11:56:31.645958  604010 logs.go:282] 0 containers: []
	W1213 11:56:31.645968  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:31.645975  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:31.646034  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:31.671037  604010 cri.go:89] found id: ""
	I1213 11:56:31.671065  604010 logs.go:282] 0 containers: []
	W1213 11:56:31.671074  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:31.671116  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:31.671180  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:31.696779  604010 cri.go:89] found id: ""
	I1213 11:56:31.696805  604010 logs.go:282] 0 containers: []
	W1213 11:56:31.696814  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:31.696820  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:31.696886  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:31.721074  604010 cri.go:89] found id: ""
	I1213 11:56:31.721152  604010 logs.go:282] 0 containers: []
	W1213 11:56:31.721175  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:31.721198  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:31.721238  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:31.776685  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:31.776720  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:31.793212  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:31.793241  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:31.856954  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:31.848666   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:31.849288   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:31.850793   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:31.851220   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:31.852660   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:31.857017  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:31.857044  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:31.882038  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:31.882070  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:34.425618  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:34.436018  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:34.436163  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:34.460322  604010 cri.go:89] found id: ""
	I1213 11:56:34.460347  604010 logs.go:282] 0 containers: []
	W1213 11:56:34.460356  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:34.460362  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:34.460442  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:34.484514  604010 cri.go:89] found id: ""
	I1213 11:56:34.484582  604010 logs.go:282] 0 containers: []
	W1213 11:56:34.484607  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:34.484622  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:34.484695  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:34.513969  604010 cri.go:89] found id: ""
	I1213 11:56:34.514006  604010 logs.go:282] 0 containers: []
	W1213 11:56:34.514016  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:34.514023  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:34.514089  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:34.541219  604010 cri.go:89] found id: ""
	I1213 11:56:34.541245  604010 logs.go:282] 0 containers: []
	W1213 11:56:34.541254  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:34.541260  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:34.541323  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:34.570631  604010 cri.go:89] found id: ""
	I1213 11:56:34.570653  604010 logs.go:282] 0 containers: []
	W1213 11:56:34.570662  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:34.570668  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:34.570749  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:34.594597  604010 cri.go:89] found id: ""
	I1213 11:56:34.594636  604010 logs.go:282] 0 containers: []
	W1213 11:56:34.594645  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:34.594651  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:34.594741  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:34.618131  604010 cri.go:89] found id: ""
	I1213 11:56:34.618159  604010 logs.go:282] 0 containers: []
	W1213 11:56:34.618168  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:34.618174  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:34.618230  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:34.645177  604010 cri.go:89] found id: ""
	I1213 11:56:34.645204  604010 logs.go:282] 0 containers: []
	W1213 11:56:34.645213  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:34.645223  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:34.645235  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:34.674203  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:34.674235  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:34.731298  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:34.731332  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:34.747591  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:34.747623  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:34.811066  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:34.802515   10213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:34.803209   10213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:34.804716   10213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:34.805051   10213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:34.806504   10213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:34.811137  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:34.811171  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:37.342058  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:37.352580  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:37.352649  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:37.376663  604010 cri.go:89] found id: ""
	I1213 11:56:37.376689  604010 logs.go:282] 0 containers: []
	W1213 11:56:37.376698  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:37.376704  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:37.376763  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:37.400694  604010 cri.go:89] found id: ""
	I1213 11:56:37.400720  604010 logs.go:282] 0 containers: []
	W1213 11:56:37.400728  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:37.400735  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:37.400796  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:37.425687  604010 cri.go:89] found id: ""
	I1213 11:56:37.425715  604010 logs.go:282] 0 containers: []
	W1213 11:56:37.425724  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:37.425730  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:37.425787  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:37.450160  604010 cri.go:89] found id: ""
	I1213 11:56:37.450189  604010 logs.go:282] 0 containers: []
	W1213 11:56:37.450198  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:37.450205  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:37.450266  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:37.475110  604010 cri.go:89] found id: ""
	I1213 11:56:37.475133  604010 logs.go:282] 0 containers: []
	W1213 11:56:37.475142  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:37.475149  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:37.475207  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:37.499102  604010 cri.go:89] found id: ""
	I1213 11:56:37.499171  604010 logs.go:282] 0 containers: []
	W1213 11:56:37.499196  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:37.499207  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:37.499282  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:37.528584  604010 cri.go:89] found id: ""
	I1213 11:56:37.528609  604010 logs.go:282] 0 containers: []
	W1213 11:56:37.528618  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:37.528624  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:37.528708  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:37.554175  604010 cri.go:89] found id: ""
	I1213 11:56:37.554259  604010 logs.go:282] 0 containers: []
	W1213 11:56:37.554283  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:37.554304  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:37.554347  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:37.612670  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:37.612706  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:37.629187  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:37.629218  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:37.694612  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:37.685617   10314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:37.686619   10314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:37.688268   10314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:37.688681   10314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:37.690407   10314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:37.694640  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:37.694653  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:37.719952  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:37.719988  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:40.252201  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:40.265281  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:40.265368  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:40.289761  604010 cri.go:89] found id: ""
	I1213 11:56:40.289841  604010 logs.go:282] 0 containers: []
	W1213 11:56:40.289865  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:40.289885  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:40.289969  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:40.314886  604010 cri.go:89] found id: ""
	I1213 11:56:40.314911  604010 logs.go:282] 0 containers: []
	W1213 11:56:40.314920  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:40.314928  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:40.314988  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:40.340433  604010 cri.go:89] found id: ""
	I1213 11:56:40.340460  604010 logs.go:282] 0 containers: []
	W1213 11:56:40.340469  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:40.340475  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:40.340535  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:40.369630  604010 cri.go:89] found id: ""
	I1213 11:56:40.369657  604010 logs.go:282] 0 containers: []
	W1213 11:56:40.369666  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:40.369672  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:40.369730  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:40.396456  604010 cri.go:89] found id: ""
	I1213 11:56:40.396480  604010 logs.go:282] 0 containers: []
	W1213 11:56:40.396489  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:40.396495  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:40.396550  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:40.420915  604010 cri.go:89] found id: ""
	I1213 11:56:40.420982  604010 logs.go:282] 0 containers: []
	W1213 11:56:40.420996  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:40.421004  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:40.421067  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:40.445305  604010 cri.go:89] found id: ""
	I1213 11:56:40.445339  604010 logs.go:282] 0 containers: []
	W1213 11:56:40.445349  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:40.445355  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:40.445423  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:40.470359  604010 cri.go:89] found id: ""
	I1213 11:56:40.470396  604010 logs.go:282] 0 containers: []
	W1213 11:56:40.470406  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:40.470415  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:40.470428  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:40.529991  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:40.530029  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:40.545704  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:40.545785  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:40.614385  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:40.605002   10428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:40.605654   10428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:40.608020   10428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:40.608670   10428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:40.609867   10428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:40.614411  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:40.614423  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:40.640189  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:40.640226  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:43.171206  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:43.187532  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:43.187604  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:43.255773  604010 cri.go:89] found id: ""
	I1213 11:56:43.255816  604010 logs.go:282] 0 containers: []
	W1213 11:56:43.255826  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:43.255833  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:43.255893  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:43.282066  604010 cri.go:89] found id: ""
	I1213 11:56:43.282095  604010 logs.go:282] 0 containers: []
	W1213 11:56:43.282104  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:43.282110  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:43.282169  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:43.307994  604010 cri.go:89] found id: ""
	I1213 11:56:43.308022  604010 logs.go:282] 0 containers: []
	W1213 11:56:43.308031  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:43.308037  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:43.308094  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:43.333649  604010 cri.go:89] found id: ""
	I1213 11:56:43.333682  604010 logs.go:282] 0 containers: []
	W1213 11:56:43.333692  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:43.333699  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:43.333761  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:43.364007  604010 cri.go:89] found id: ""
	I1213 11:56:43.364037  604010 logs.go:282] 0 containers: []
	W1213 11:56:43.364045  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:43.364052  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:43.364110  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:43.389343  604010 cri.go:89] found id: ""
	I1213 11:56:43.389381  604010 logs.go:282] 0 containers: []
	W1213 11:56:43.389389  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:43.389396  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:43.389466  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:43.414572  604010 cri.go:89] found id: ""
	I1213 11:56:43.414608  604010 logs.go:282] 0 containers: []
	W1213 11:56:43.414618  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:43.414624  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:43.414711  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:43.439971  604010 cri.go:89] found id: ""
	I1213 11:56:43.439999  604010 logs.go:282] 0 containers: []
	W1213 11:56:43.440008  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:43.440018  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:43.440034  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:43.455350  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:43.455380  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:43.518971  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:43.510133   10540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:43.510875   10540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:43.512575   10540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:43.513204   10540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:43.514989   10540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:43.519004  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:43.519017  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:43.543826  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:43.543863  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:43.571534  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:43.571561  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
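	The same five log sources are collected on every iteration, only in varying order. Taken together they amount to this standalone bundle script, assembled verbatim from the commands in the log (assumes a shell on the minikube node):

	  #!/bin/bash
	  # Log bundle gathered by the harness on each failed poll.
	  sudo journalctl -u kubelet -n 400
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig
	  sudo journalctl -u containerd -n 400
	  sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a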
	I1213 11:56:46.127908  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:46.138548  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:46.138627  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:46.177176  604010 cri.go:89] found id: ""
	I1213 11:56:46.177205  604010 logs.go:282] 0 containers: []
	W1213 11:56:46.177214  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:46.177220  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:46.177280  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:46.250872  604010 cri.go:89] found id: ""
	I1213 11:56:46.250897  604010 logs.go:282] 0 containers: []
	W1213 11:56:46.250906  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:46.250913  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:46.250972  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:46.276982  604010 cri.go:89] found id: ""
	I1213 11:56:46.277008  604010 logs.go:282] 0 containers: []
	W1213 11:56:46.277020  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:46.277026  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:46.277086  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:46.308722  604010 cri.go:89] found id: ""
	I1213 11:56:46.308745  604010 logs.go:282] 0 containers: []
	W1213 11:56:46.308754  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:46.308760  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:46.308819  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:46.333457  604010 cri.go:89] found id: ""
	I1213 11:56:46.333479  604010 logs.go:282] 0 containers: []
	W1213 11:56:46.333488  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:46.333495  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:46.333551  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:46.361010  604010 cri.go:89] found id: ""
	I1213 11:56:46.361034  604010 logs.go:282] 0 containers: []
	W1213 11:56:46.361042  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:46.361049  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:46.361107  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:46.385580  604010 cri.go:89] found id: ""
	I1213 11:56:46.385608  604010 logs.go:282] 0 containers: []
	W1213 11:56:46.385625  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:46.385631  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:46.385689  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:46.410013  604010 cri.go:89] found id: ""
	I1213 11:56:46.410041  604010 logs.go:282] 0 containers: []
	W1213 11:56:46.410050  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:46.410059  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:46.410071  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:46.474489  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:46.465232   10652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:46.465851   10652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:46.467612   10652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:46.468248   10652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:46.469990   10652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:46.474512  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:46.474525  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:46.499926  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:46.499961  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:46.529519  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:46.529543  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:46.585780  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:46.585816  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
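	The cycle above repeats, essentially unchanged, every two to three seconds for the remainder of this log: minikube probes for a running kube-apiserver process, finds none, lists each control-plane container through crictl, and then gathers kubelet, dmesg, describe-nodes, containerd, and container-status diagnostics. A minimal bash sketch of that probe loop, reconstructed only from the commands visible in the entries above (the ~3 s interval and the component list are inferred from the timestamps and log entries, not taken from minikube source):

	    # Sketch of the wait-for-apiserver loop implied by the log entries above.
	    # All commands are copied from the log; the 3 s interval is an assumption.
	    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	               kube-controller-manager kindnet kubernetes-dashboard; do
	        sudo crictl ps -a --quiet --name="$c"   # empty in every iteration below
	      done
	      sleep 3
	    done

	Every iteration returns empty container lists, which is why each subsequent describe-nodes attempt fails in the same way.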
	I1213 11:56:49.102338  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:49.113041  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:49.113164  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:49.137484  604010 cri.go:89] found id: ""
	I1213 11:56:49.137527  604010 logs.go:282] 0 containers: []
	W1213 11:56:49.137536  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:49.137543  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:49.137633  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:49.176305  604010 cri.go:89] found id: ""
	I1213 11:56:49.176345  604010 logs.go:282] 0 containers: []
	W1213 11:56:49.176354  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:49.176360  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:49.176445  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:49.216965  604010 cri.go:89] found id: ""
	I1213 11:56:49.216992  604010 logs.go:282] 0 containers: []
	W1213 11:56:49.217001  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:49.217007  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:49.217076  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:49.262147  604010 cri.go:89] found id: ""
	I1213 11:56:49.262226  604010 logs.go:282] 0 containers: []
	W1213 11:56:49.262256  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:49.262277  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:49.262367  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:49.292097  604010 cri.go:89] found id: ""
	I1213 11:56:49.292124  604010 logs.go:282] 0 containers: []
	W1213 11:56:49.292133  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:49.292140  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:49.292195  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:49.316193  604010 cri.go:89] found id: ""
	I1213 11:56:49.316219  604010 logs.go:282] 0 containers: []
	W1213 11:56:49.316228  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:49.316235  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:49.316293  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:49.341385  604010 cri.go:89] found id: ""
	I1213 11:56:49.341411  604010 logs.go:282] 0 containers: []
	W1213 11:56:49.341421  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:49.341434  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:49.341503  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:49.365851  604010 cri.go:89] found id: ""
	I1213 11:56:49.365874  604010 logs.go:282] 0 containers: []
	W1213 11:56:49.365883  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:49.365892  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:49.365903  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:49.381508  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:49.381537  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:49.444383  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:49.436163   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:49.436758   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:49.438415   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:49.438958   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:49.440549   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:49.444406  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:49.444419  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:49.469593  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:49.469636  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:49.497881  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:49.497912  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:52.053968  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:52.065301  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:52.065418  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:52.096894  604010 cri.go:89] found id: ""
	I1213 11:56:52.096966  604010 logs.go:282] 0 containers: []
	W1213 11:56:52.096988  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:52.097007  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:52.097097  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:52.124148  604010 cri.go:89] found id: ""
	I1213 11:56:52.124173  604010 logs.go:282] 0 containers: []
	W1213 11:56:52.124186  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:52.124193  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:52.124306  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:52.160416  604010 cri.go:89] found id: ""
	I1213 11:56:52.160439  604010 logs.go:282] 0 containers: []
	W1213 11:56:52.160448  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:52.160455  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:52.160513  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:52.200069  604010 cri.go:89] found id: ""
	I1213 11:56:52.200095  604010 logs.go:282] 0 containers: []
	W1213 11:56:52.200104  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:52.200111  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:52.200174  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:52.263224  604010 cri.go:89] found id: ""
	I1213 11:56:52.263295  604010 logs.go:282] 0 containers: []
	W1213 11:56:52.263310  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:52.263318  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:52.263375  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:52.288649  604010 cri.go:89] found id: ""
	I1213 11:56:52.288675  604010 logs.go:282] 0 containers: []
	W1213 11:56:52.288684  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:52.288691  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:52.288754  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:52.316561  604010 cri.go:89] found id: ""
	I1213 11:56:52.316588  604010 logs.go:282] 0 containers: []
	W1213 11:56:52.316596  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:52.316603  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:52.316660  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:52.341885  604010 cri.go:89] found id: ""
	I1213 11:56:52.341909  604010 logs.go:282] 0 containers: []
	W1213 11:56:52.341918  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:52.341927  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:52.341938  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:52.397001  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:52.397038  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:52.415607  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:52.415635  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:52.493248  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:52.484194   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:52.484676   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:52.486433   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:52.486904   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:52.488650   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:52.493274  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:52.493288  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:52.518551  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:52.518588  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:55.047907  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:55.059302  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:55.059421  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:55.085237  604010 cri.go:89] found id: ""
	I1213 11:56:55.085271  604010 logs.go:282] 0 containers: []
	W1213 11:56:55.085281  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:55.085288  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:55.085362  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:55.112434  604010 cri.go:89] found id: ""
	I1213 11:56:55.112462  604010 logs.go:282] 0 containers: []
	W1213 11:56:55.112475  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:55.112482  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:55.112544  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:55.138067  604010 cri.go:89] found id: ""
	I1213 11:56:55.138101  604010 logs.go:282] 0 containers: []
	W1213 11:56:55.138110  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:55.138117  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:55.138184  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:55.179401  604010 cri.go:89] found id: ""
	I1213 11:56:55.179522  604010 logs.go:282] 0 containers: []
	W1213 11:56:55.179548  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:55.179588  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:55.179766  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:55.234369  604010 cri.go:89] found id: ""
	I1213 11:56:55.234462  604010 logs.go:282] 0 containers: []
	W1213 11:56:55.234499  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:55.234544  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:55.234676  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:55.277189  604010 cri.go:89] found id: ""
	I1213 11:56:55.277271  604010 logs.go:282] 0 containers: []
	W1213 11:56:55.277294  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:55.277314  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:55.277416  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:55.310856  604010 cri.go:89] found id: ""
	I1213 11:56:55.310933  604010 logs.go:282] 0 containers: []
	W1213 11:56:55.310949  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:55.310958  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:55.311020  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:55.337357  604010 cri.go:89] found id: ""
	I1213 11:56:55.337453  604010 logs.go:282] 0 containers: []
	W1213 11:56:55.337468  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:55.337478  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:55.337490  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:55.392569  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:55.392607  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:55.408576  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:55.408608  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:55.471726  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:55.463854   10996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:55.464422   10996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:55.465928   10996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:55.466440   10996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:55.467966   10996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:55.471749  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:55.471762  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:55.497230  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:55.497266  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:58.026521  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:58.040495  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:58.040579  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:58.067542  604010 cri.go:89] found id: ""
	I1213 11:56:58.067567  604010 logs.go:282] 0 containers: []
	W1213 11:56:58.067576  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:58.067583  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:58.067649  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:58.092616  604010 cri.go:89] found id: ""
	I1213 11:56:58.092642  604010 logs.go:282] 0 containers: []
	W1213 11:56:58.092651  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:58.092657  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:58.092714  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:58.117533  604010 cri.go:89] found id: ""
	I1213 11:56:58.117561  604010 logs.go:282] 0 containers: []
	W1213 11:56:58.117572  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:58.117578  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:58.117669  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:58.143441  604010 cri.go:89] found id: ""
	I1213 11:56:58.143465  604010 logs.go:282] 0 containers: []
	W1213 11:56:58.143474  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:58.143481  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:58.143540  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:58.191063  604010 cri.go:89] found id: ""
	I1213 11:56:58.191086  604010 logs.go:282] 0 containers: []
	W1213 11:56:58.191096  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:58.191102  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:58.191175  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:58.233666  604010 cri.go:89] found id: ""
	I1213 11:56:58.233709  604010 logs.go:282] 0 containers: []
	W1213 11:56:58.233727  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:58.233734  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:58.233805  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:58.285997  604010 cri.go:89] found id: ""
	I1213 11:56:58.286020  604010 logs.go:282] 0 containers: []
	W1213 11:56:58.286029  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:58.286035  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:58.286099  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:58.313519  604010 cri.go:89] found id: ""
	I1213 11:56:58.313544  604010 logs.go:282] 0 containers: []
	W1213 11:56:58.313553  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:58.313570  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:58.313581  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:58.372174  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:58.372208  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:58.387775  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:58.387803  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:58.457676  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:58.448571   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:58.449279   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:58.451118   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:58.451644   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:58.453241   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:56:58.457698  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:58.457711  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:58.482922  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:58.482956  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:01.016291  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:01.027467  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:01.027540  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:01.061002  604010 cri.go:89] found id: ""
	I1213 11:57:01.061026  604010 logs.go:282] 0 containers: []
	W1213 11:57:01.061035  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:01.061041  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:01.061099  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:01.090375  604010 cri.go:89] found id: ""
	I1213 11:57:01.090403  604010 logs.go:282] 0 containers: []
	W1213 11:57:01.090412  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:01.090418  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:01.090476  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:01.118417  604010 cri.go:89] found id: ""
	I1213 11:57:01.118441  604010 logs.go:282] 0 containers: []
	W1213 11:57:01.118450  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:01.118456  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:01.118521  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:01.147901  604010 cri.go:89] found id: ""
	I1213 11:57:01.147929  604010 logs.go:282] 0 containers: []
	W1213 11:57:01.147938  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:01.147946  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:01.148009  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:01.207604  604010 cri.go:89] found id: ""
	I1213 11:57:01.207681  604010 logs.go:282] 0 containers: []
	W1213 11:57:01.207708  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:01.207727  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:01.207818  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:01.263340  604010 cri.go:89] found id: ""
	I1213 11:57:01.263407  604010 logs.go:282] 0 containers: []
	W1213 11:57:01.263428  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:01.263446  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:01.263531  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:01.296139  604010 cri.go:89] found id: ""
	I1213 11:57:01.296213  604010 logs.go:282] 0 containers: []
	W1213 11:57:01.296231  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:01.296242  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:01.296313  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:01.323150  604010 cri.go:89] found id: ""
	I1213 11:57:01.323175  604010 logs.go:282] 0 containers: []
	W1213 11:57:01.323185  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:01.323194  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:01.323206  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:01.351631  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:01.351659  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:01.410361  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:01.410398  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:01.426884  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:01.426921  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:01.495923  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:01.487940   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:01.488738   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:01.490397   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:01.490777   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:01.492041   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:57:01.495947  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:01.495960  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:04.023306  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:04.034376  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:04.034451  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:04.058883  604010 cri.go:89] found id: ""
	I1213 11:57:04.058911  604010 logs.go:282] 0 containers: []
	W1213 11:57:04.058921  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:04.058929  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:04.058990  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:04.084571  604010 cri.go:89] found id: ""
	I1213 11:57:04.084598  604010 logs.go:282] 0 containers: []
	W1213 11:57:04.084607  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:04.084615  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:04.084698  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:04.111492  604010 cri.go:89] found id: ""
	I1213 11:57:04.111518  604010 logs.go:282] 0 containers: []
	W1213 11:57:04.111527  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:04.111534  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:04.111594  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:04.140605  604010 cri.go:89] found id: ""
	I1213 11:57:04.140632  604010 logs.go:282] 0 containers: []
	W1213 11:57:04.140641  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:04.140648  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:04.140709  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:04.170556  604010 cri.go:89] found id: ""
	I1213 11:57:04.170583  604010 logs.go:282] 0 containers: []
	W1213 11:57:04.170592  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:04.170598  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:04.170654  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:04.221024  604010 cri.go:89] found id: ""
	I1213 11:57:04.221047  604010 logs.go:282] 0 containers: []
	W1213 11:57:04.221056  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:04.221062  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:04.221120  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:04.258557  604010 cri.go:89] found id: ""
	I1213 11:57:04.258583  604010 logs.go:282] 0 containers: []
	W1213 11:57:04.258601  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:04.258608  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:04.258667  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:04.286096  604010 cri.go:89] found id: ""
	I1213 11:57:04.286121  604010 logs.go:282] 0 containers: []
	W1213 11:57:04.286130  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:04.286140  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:04.286154  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:04.342856  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:04.342892  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:04.359212  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:04.359247  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:04.426841  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:04.417916   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:04.418505   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:04.420627   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:04.421110   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:04.422742   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:57:04.426863  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:04.426876  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:04.452958  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:04.452999  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
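	Every describe-nodes attempt in this log fails identically: with no kube-apiserver container running, nothing listens on localhost:8443, so kubectl's API discovery is refused five times and gives up. A hypothetical spot-check from inside the node, using one command already shown above plus a plain curl probe (the curl call is an assumption for illustration, not something minikube runs here), would confirm the same state:

	    sudo crictl ps -a --quiet --name=kube-apiserver   # prints nothing, per the log
	    curl -ksS https://localhost:8443/healthz          # fails: connection refused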
	I1213 11:57:06.985291  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:06.996435  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:06.996506  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:07.027757  604010 cri.go:89] found id: ""
	I1213 11:57:07.027792  604010 logs.go:282] 0 containers: []
	W1213 11:57:07.027802  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:07.027808  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:07.027875  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:07.053033  604010 cri.go:89] found id: ""
	I1213 11:57:07.053059  604010 logs.go:282] 0 containers: []
	W1213 11:57:07.053068  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:07.053075  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:07.053135  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:07.077293  604010 cri.go:89] found id: ""
	I1213 11:57:07.077320  604010 logs.go:282] 0 containers: []
	W1213 11:57:07.077330  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:07.077336  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:07.077400  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:07.101590  604010 cri.go:89] found id: ""
	I1213 11:57:07.101615  604010 logs.go:282] 0 containers: []
	W1213 11:57:07.101630  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:07.101636  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:07.101693  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:07.129837  604010 cri.go:89] found id: ""
	I1213 11:57:07.129867  604010 logs.go:282] 0 containers: []
	W1213 11:57:07.129877  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:07.129883  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:07.129943  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:07.155693  604010 cri.go:89] found id: ""
	I1213 11:57:07.155719  604010 logs.go:282] 0 containers: []
	W1213 11:57:07.155729  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:07.155735  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:07.155799  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:07.208290  604010 cri.go:89] found id: ""
	I1213 11:57:07.208318  604010 logs.go:282] 0 containers: []
	W1213 11:57:07.208327  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:07.208334  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:07.208398  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:07.260450  604010 cri.go:89] found id: ""
	I1213 11:57:07.260475  604010 logs.go:282] 0 containers: []
	W1213 11:57:07.260485  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:07.260494  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:07.260505  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:07.317882  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:07.317918  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:07.334495  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:07.334524  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:07.403490  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:07.393965   11443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:07.394975   11443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:07.396603   11443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:07.397190   11443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:07.398983   11443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:57:07.403516  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:07.403531  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:07.428864  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:07.428901  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:09.962852  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:09.973890  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:09.973963  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:10.008764  604010 cri.go:89] found id: ""
	I1213 11:57:10.008791  604010 logs.go:282] 0 containers: []
	W1213 11:57:10.008801  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:10.008808  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:10.008881  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:10.042627  604010 cri.go:89] found id: ""
	I1213 11:57:10.042655  604010 logs.go:282] 0 containers: []
	W1213 11:57:10.042667  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:10.042674  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:10.042762  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:10.070196  604010 cri.go:89] found id: ""
	I1213 11:57:10.070222  604010 logs.go:282] 0 containers: []
	W1213 11:57:10.070231  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:10.070238  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:10.070304  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:10.097458  604010 cri.go:89] found id: ""
	I1213 11:57:10.097484  604010 logs.go:282] 0 containers: []
	W1213 11:57:10.097493  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:10.097500  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:10.097559  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:10.124061  604010 cri.go:89] found id: ""
	I1213 11:57:10.124087  604010 logs.go:282] 0 containers: []
	W1213 11:57:10.124095  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:10.124101  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:10.124158  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:10.153659  604010 cri.go:89] found id: ""
	I1213 11:57:10.153696  604010 logs.go:282] 0 containers: []
	W1213 11:57:10.153705  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:10.153713  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:10.153792  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:10.226910  604010 cri.go:89] found id: ""
	I1213 11:57:10.226938  604010 logs.go:282] 0 containers: []
	W1213 11:57:10.226947  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:10.226953  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:10.227010  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:10.265652  604010 cri.go:89] found id: ""
	I1213 11:57:10.265676  604010 logs.go:282] 0 containers: []
	W1213 11:57:10.265685  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:10.265695  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:10.265707  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:10.332797  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:10.323569   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:10.325115   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:10.325998   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:10.326908   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:10.328530   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
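	Every kubectl call in this stretch fails identically: the kubeconfig at /var/lib/minikube/kubeconfig points at https://localhost:8443, and nothing is listening there because no kube-apiserver container exists yet. Assuming the standard apiserver health endpoints, a quick manual probe from inside the node would be:
	
	  # "connection refused" while the apiserver is down; "ok" once it serves
	  curl -sk https://localhost:8443/livez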
	I1213 11:57:10.332820  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:10.332832  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:10.357553  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:10.357592  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:10.391809  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:10.391838  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:10.447255  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:10.447293  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
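	The block above is one iteration of a wait loop: roughly every three seconds (11:57:07, :10, :13, ...) minikube re-checks for a running apiserver and re-lists CRI containers. Each iteration starts with the two commands below, taken verbatim from the log:
	
	  # is a kube-apiserver process for this profile alive?
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	  # does the CRI runtime know of any kube-apiserver container?
	  sudo crictl ps -a --quiet --name=kube-apiserver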
	I1213 11:57:12.963670  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:12.974670  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:12.974767  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:13.006230  604010 cri.go:89] found id: ""
	I1213 11:57:13.006259  604010 logs.go:282] 0 containers: []
	W1213 11:57:13.006268  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:13.006275  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:13.006340  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:13.031301  604010 cri.go:89] found id: ""
	I1213 11:57:13.031325  604010 logs.go:282] 0 containers: []
	W1213 11:57:13.031334  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:13.031340  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:13.031396  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:13.055897  604010 cri.go:89] found id: ""
	I1213 11:57:13.055927  604010 logs.go:282] 0 containers: []
	W1213 11:57:13.055936  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:13.055942  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:13.056003  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:13.081708  604010 cri.go:89] found id: ""
	I1213 11:57:13.081733  604010 logs.go:282] 0 containers: []
	W1213 11:57:13.081748  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:13.081755  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:13.081812  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:13.111812  604010 cri.go:89] found id: ""
	I1213 11:57:13.111885  604010 logs.go:282] 0 containers: []
	W1213 11:57:13.111900  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:13.111909  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:13.111971  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:13.136957  604010 cri.go:89] found id: ""
	I1213 11:57:13.136992  604010 logs.go:282] 0 containers: []
	W1213 11:57:13.137001  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:13.137025  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:13.137099  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:13.180320  604010 cri.go:89] found id: ""
	I1213 11:57:13.180354  604010 logs.go:282] 0 containers: []
	W1213 11:57:13.180363  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:13.180370  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:13.180438  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:13.232992  604010 cri.go:89] found id: ""
	I1213 11:57:13.233027  604010 logs.go:282] 0 containers: []
	W1213 11:57:13.233037  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:13.233047  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:13.233060  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:13.306234  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:13.297958   11664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:13.298476   11664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:13.299586   11664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:13.299955   11664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:13.301394   11664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:57:13.306257  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:13.306272  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:13.331798  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:13.331837  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:13.364219  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:13.364248  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:13.419158  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:13.419191  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:15.935716  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:15.946701  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:15.946796  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:15.972298  604010 cri.go:89] found id: ""
	I1213 11:57:15.972375  604010 logs.go:282] 0 containers: []
	W1213 11:57:15.972392  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:15.972399  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:15.972468  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:15.997435  604010 cri.go:89] found id: ""
	I1213 11:57:15.997458  604010 logs.go:282] 0 containers: []
	W1213 11:57:15.997467  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:15.997474  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:15.997540  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:16.026069  604010 cri.go:89] found id: ""
	I1213 11:57:16.026107  604010 logs.go:282] 0 containers: []
	W1213 11:57:16.026116  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:16.026123  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:16.026190  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:16.051047  604010 cri.go:89] found id: ""
	I1213 11:57:16.051125  604010 logs.go:282] 0 containers: []
	W1213 11:57:16.051141  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:16.051149  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:16.051209  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:16.076992  604010 cri.go:89] found id: ""
	I1213 11:57:16.077060  604010 logs.go:282] 0 containers: []
	W1213 11:57:16.077086  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:16.077104  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:16.077190  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:16.104719  604010 cri.go:89] found id: ""
	I1213 11:57:16.104788  604010 logs.go:282] 0 containers: []
	W1213 11:57:16.104811  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:16.104830  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:16.104918  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:16.136668  604010 cri.go:89] found id: ""
	I1213 11:57:16.136696  604010 logs.go:282] 0 containers: []
	W1213 11:57:16.136705  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:16.136712  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:16.136772  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:16.184065  604010 cri.go:89] found id: ""
	I1213 11:57:16.184100  604010 logs.go:282] 0 containers: []
	W1213 11:57:16.184111  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:16.184120  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:16.184153  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:16.270928  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:16.270968  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:16.287140  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:16.287175  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:16.357398  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:16.349038   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:16.349516   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:16.351357   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:16.351864   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:16.353557   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:57:16.357423  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:16.357435  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:16.381740  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:16.381774  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:18.910619  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:18.921087  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:18.921166  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:18.946478  604010 cri.go:89] found id: ""
	I1213 11:57:18.946503  604010 logs.go:282] 0 containers: []
	W1213 11:57:18.946512  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:18.946519  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:18.946578  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:18.971279  604010 cri.go:89] found id: ""
	I1213 11:57:18.971304  604010 logs.go:282] 0 containers: []
	W1213 11:57:18.971313  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:18.971320  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:18.971378  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:18.996033  604010 cri.go:89] found id: ""
	I1213 11:57:18.996059  604010 logs.go:282] 0 containers: []
	W1213 11:57:18.996068  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:18.996074  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:18.996158  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:19.021977  604010 cri.go:89] found id: ""
	I1213 11:57:19.022006  604010 logs.go:282] 0 containers: []
	W1213 11:57:19.022015  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:19.022024  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:19.022086  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:19.046193  604010 cri.go:89] found id: ""
	I1213 11:57:19.046221  604010 logs.go:282] 0 containers: []
	W1213 11:57:19.046230  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:19.046236  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:19.046297  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:19.070868  604010 cri.go:89] found id: ""
	I1213 11:57:19.070895  604010 logs.go:282] 0 containers: []
	W1213 11:57:19.070904  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:19.070911  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:19.071001  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:19.096253  604010 cri.go:89] found id: ""
	I1213 11:57:19.096276  604010 logs.go:282] 0 containers: []
	W1213 11:57:19.096285  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:19.096292  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:19.096373  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:19.121131  604010 cri.go:89] found id: ""
	I1213 11:57:19.121167  604010 logs.go:282] 0 containers: []
	W1213 11:57:19.121177  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:19.121186  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:19.121216  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:19.208507  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:19.190547   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:19.191444   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:19.193889   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:19.194572   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:19.199234   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:57:19.208539  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:19.208553  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:19.237572  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:19.237656  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:19.276423  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:19.276448  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:19.334610  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:19.334648  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:21.851744  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:21.861936  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:21.861999  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:21.885880  604010 cri.go:89] found id: ""
	I1213 11:57:21.885901  604010 logs.go:282] 0 containers: []
	W1213 11:57:21.885909  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:21.885916  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:21.885971  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:21.909866  604010 cri.go:89] found id: ""
	I1213 11:57:21.909889  604010 logs.go:282] 0 containers: []
	W1213 11:57:21.909898  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:21.909904  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:21.909961  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:21.934547  604010 cri.go:89] found id: ""
	I1213 11:57:21.934576  604010 logs.go:282] 0 containers: []
	W1213 11:57:21.934585  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:21.934591  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:21.934651  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:21.959889  604010 cri.go:89] found id: ""
	I1213 11:57:21.959915  604010 logs.go:282] 0 containers: []
	W1213 11:57:21.959925  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:21.959932  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:21.959988  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:21.989023  604010 cri.go:89] found id: ""
	I1213 11:57:21.989099  604010 logs.go:282] 0 containers: []
	W1213 11:57:21.989134  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:21.989159  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:21.989243  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:22.019806  604010 cri.go:89] found id: ""
	I1213 11:57:22.019848  604010 logs.go:282] 0 containers: []
	W1213 11:57:22.019861  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:22.019868  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:22.019934  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:22.044814  604010 cri.go:89] found id: ""
	I1213 11:57:22.044841  604010 logs.go:282] 0 containers: []
	W1213 11:57:22.044852  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:22.044858  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:22.044923  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:22.074682  604010 cri.go:89] found id: ""
	I1213 11:57:22.074726  604010 logs.go:282] 0 containers: []
	W1213 11:57:22.074735  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:22.074745  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:22.074757  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:22.150025  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:22.141291   11998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:22.141746   11998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:22.143484   11998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:22.144157   11998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:22.146009   11998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:57:22.150049  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:22.150062  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:22.178881  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:22.178917  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:22.216709  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:22.216740  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:22.281457  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:22.281489  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:24.798312  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:24.808695  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:24.808764  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:24.835809  604010 cri.go:89] found id: ""
	I1213 11:57:24.835839  604010 logs.go:282] 0 containers: []
	W1213 11:57:24.835848  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:24.835855  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:24.835913  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:24.864535  604010 cri.go:89] found id: ""
	I1213 11:57:24.864560  604010 logs.go:282] 0 containers: []
	W1213 11:57:24.864568  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:24.864574  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:24.864630  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:24.894267  604010 cri.go:89] found id: ""
	I1213 11:57:24.894290  604010 logs.go:282] 0 containers: []
	W1213 11:57:24.894299  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:24.894305  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:24.894364  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:24.923204  604010 cri.go:89] found id: ""
	I1213 11:57:24.923237  604010 logs.go:282] 0 containers: []
	W1213 11:57:24.923248  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:24.923254  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:24.923313  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:24.957663  604010 cri.go:89] found id: ""
	I1213 11:57:24.957689  604010 logs.go:282] 0 containers: []
	W1213 11:57:24.957698  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:24.957705  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:24.957786  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:24.982499  604010 cri.go:89] found id: ""
	I1213 11:57:24.982524  604010 logs.go:282] 0 containers: []
	W1213 11:57:24.982533  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:24.982539  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:24.982596  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:25.013305  604010 cri.go:89] found id: ""
	I1213 11:57:25.013332  604010 logs.go:282] 0 containers: []
	W1213 11:57:25.013342  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:25.013348  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:25.013426  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:25.042403  604010 cri.go:89] found id: ""
	I1213 11:57:25.042429  604010 logs.go:282] 0 containers: []
	W1213 11:57:25.042440  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:25.042450  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:25.042462  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:25.110074  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:25.100728   12114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:25.101372   12114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:25.103156   12114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:25.103840   12114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:25.106138   12114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:57:25.110097  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:25.110109  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:25.136135  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:25.136175  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:25.187750  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:25.187781  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:25.269417  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:25.269496  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
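	With no control-plane containers to inspect, the only remaining evidence comes from the host itself: the last 400 lines of the kubelet and containerd journals plus kernel warnings. The same data can be pulled by hand with the commands the log already shows:
	
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u containerd -n 400
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400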
	I1213 11:57:27.795410  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:27.806308  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:27.806393  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:27.833178  604010 cri.go:89] found id: ""
	I1213 11:57:27.833204  604010 logs.go:282] 0 containers: []
	W1213 11:57:27.833213  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:27.833220  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:27.833280  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:27.864759  604010 cri.go:89] found id: ""
	I1213 11:57:27.864790  604010 logs.go:282] 0 containers: []
	W1213 11:57:27.864800  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:27.864807  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:27.864870  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:27.894576  604010 cri.go:89] found id: ""
	I1213 11:57:27.894643  604010 logs.go:282] 0 containers: []
	W1213 11:57:27.894668  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:27.894722  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:27.894809  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:27.919695  604010 cri.go:89] found id: ""
	I1213 11:57:27.919720  604010 logs.go:282] 0 containers: []
	W1213 11:57:27.919728  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:27.919735  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:27.919809  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:27.944128  604010 cri.go:89] found id: ""
	I1213 11:57:27.944152  604010 logs.go:282] 0 containers: []
	W1213 11:57:27.944161  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:27.944168  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:27.944247  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:27.968369  604010 cri.go:89] found id: ""
	I1213 11:57:27.968393  604010 logs.go:282] 0 containers: []
	W1213 11:57:27.968402  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:27.968409  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:27.968507  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:27.997345  604010 cri.go:89] found id: ""
	I1213 11:57:27.997372  604010 logs.go:282] 0 containers: []
	W1213 11:57:27.997381  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:27.997388  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:27.997451  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:28.029787  604010 cri.go:89] found id: ""
	I1213 11:57:28.029815  604010 logs.go:282] 0 containers: []
	W1213 11:57:28.029825  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:28.029837  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:28.029851  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:28.059897  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:28.059930  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:28.116398  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:28.116433  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:28.133239  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:28.133269  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:28.257725  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:28.249038   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:28.249625   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:28.251202   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:28.251730   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:28.253377   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:57:28.257746  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:28.257758  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:30.784544  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:30.795049  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:30.795122  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:30.819394  604010 cri.go:89] found id: ""
	I1213 11:57:30.819419  604010 logs.go:282] 0 containers: []
	W1213 11:57:30.819427  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:30.819434  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:30.819491  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:30.843159  604010 cri.go:89] found id: ""
	I1213 11:57:30.843184  604010 logs.go:282] 0 containers: []
	W1213 11:57:30.843193  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:30.843199  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:30.843254  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:30.869845  604010 cri.go:89] found id: ""
	I1213 11:57:30.869867  604010 logs.go:282] 0 containers: []
	W1213 11:57:30.869876  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:30.869885  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:30.869941  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:30.896812  604010 cri.go:89] found id: ""
	I1213 11:57:30.896836  604010 logs.go:282] 0 containers: []
	W1213 11:57:30.896845  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:30.896853  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:30.896913  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:30.921770  604010 cri.go:89] found id: ""
	I1213 11:57:30.921794  604010 logs.go:282] 0 containers: []
	W1213 11:57:30.921804  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:30.921810  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:30.921867  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:30.948842  604010 cri.go:89] found id: ""
	I1213 11:57:30.948869  604010 logs.go:282] 0 containers: []
	W1213 11:57:30.948878  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:30.948885  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:30.948941  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:30.975761  604010 cri.go:89] found id: ""
	I1213 11:57:30.975785  604010 logs.go:282] 0 containers: []
	W1213 11:57:30.975794  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:30.975800  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:30.975861  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:31.009297  604010 cri.go:89] found id: ""
	I1213 11:57:31.009324  604010 logs.go:282] 0 containers: []
	W1213 11:57:31.009333  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:31.009344  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:31.009357  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:31.026148  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:31.026228  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:31.092501  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:31.083099   12350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:31.083809   12350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:31.085589   12350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:31.086335   12350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:31.087969   12350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:57:31.092527  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:31.092540  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:31.119062  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:31.119100  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:31.148109  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:31.148140  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
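Each cycle above enumerates the control-plane components by running sudo crictl ps -a --quiet --name=<component> for each name and treating empty output as "no container found". Below is a minimal, hypothetical Go sketch of that listing loop, for illustration only: it assumes crictl on PATH and passwordless sudo, whereas minikube actually executes the identical commands over SSH via ssh_runner.go. The component list is copied from the log's {State:all Name:...} queries.

	// Hypothetical sketch of the per-component listing visible above.
	// Not minikube code; assumes local crictl access and sudo.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Component names copied from the log's {State:all Name:...} queries.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out))
			if err != nil || len(ids) == 0 {
				// Mirrors the log's: No container was found matching "<name>"
				fmt.Printf("No container was found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: found %d container(s): %v\n", name, len(ids), ids)
		}
	}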
	I1213 11:57:33.733415  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:33.744879  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:33.744947  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:33.769975  604010 cri.go:89] found id: ""
	I1213 11:57:33.770002  604010 logs.go:282] 0 containers: []
	W1213 11:57:33.770012  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:33.770019  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:33.770118  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:33.795564  604010 cri.go:89] found id: ""
	I1213 11:57:33.795587  604010 logs.go:282] 0 containers: []
	W1213 11:57:33.795595  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:33.795602  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:33.795658  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:33.820165  604010 cri.go:89] found id: ""
	I1213 11:57:33.820189  604010 logs.go:282] 0 containers: []
	W1213 11:57:33.820197  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:33.820205  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:33.820266  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:33.850474  604010 cri.go:89] found id: ""
	I1213 11:57:33.850496  604010 logs.go:282] 0 containers: []
	W1213 11:57:33.850504  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:33.850511  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:33.850571  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:33.875577  604010 cri.go:89] found id: ""
	I1213 11:57:33.875599  604010 logs.go:282] 0 containers: []
	W1213 11:57:33.875613  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:33.875620  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:33.875676  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:33.899672  604010 cri.go:89] found id: ""
	I1213 11:57:33.899696  604010 logs.go:282] 0 containers: []
	W1213 11:57:33.899704  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:33.899711  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:33.899771  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:33.924330  604010 cri.go:89] found id: ""
	I1213 11:57:33.924353  604010 logs.go:282] 0 containers: []
	W1213 11:57:33.924363  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:33.924369  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:33.924426  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:33.948447  604010 cri.go:89] found id: ""
	I1213 11:57:33.948470  604010 logs.go:282] 0 containers: []
	W1213 11:57:33.948479  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:33.948489  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:33.948500  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:34.007962  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:34.008002  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:34.025302  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:34.025333  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:34.092523  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:34.083642   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:34.084406   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:34.086056   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:34.086792   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:34.088528   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:34.083642   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:34.084406   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:34.086056   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:34.086792   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:34.088528   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:34.092559  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:34.092571  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:34.118672  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:34.118743  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
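The recurring "dial tcp [::1]:8443: connect: connection refused" errors mean nothing is listening on the apiserver port at all, which is consistent with crictl finding no kube-apiserver container. A minimal sketch of the same TCP-level reachability check (hypothetical code, not minikube's; the address and port are taken from the log):

	// Hypothetical reachability check; not minikube code. The address and
	// port come from the log's connection-refused errors.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 5*time.Second)
		if err != nil {
			// With no kube-apiserver container running, this fails the same
			// way every kubectl call above does.
			fmt.Println("apiserver unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}

Until a dial like this succeeds, each "describe nodes" attempt in the log will keep failing with the same connection-refused stderr.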
	I1213 11:57:36.651173  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:36.662055  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:36.662135  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:36.690956  604010 cri.go:89] found id: ""
	I1213 11:57:36.690981  604010 logs.go:282] 0 containers: []
	W1213 11:57:36.690990  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:36.690997  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:36.691067  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:36.716966  604010 cri.go:89] found id: ""
	I1213 11:57:36.716989  604010 logs.go:282] 0 containers: []
	W1213 11:57:36.716998  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:36.717004  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:36.717063  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:36.741609  604010 cri.go:89] found id: ""
	I1213 11:57:36.741651  604010 logs.go:282] 0 containers: []
	W1213 11:57:36.741661  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:36.741667  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:36.741736  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:36.766862  604010 cri.go:89] found id: ""
	I1213 11:57:36.766898  604010 logs.go:282] 0 containers: []
	W1213 11:57:36.766907  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:36.766914  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:36.766978  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:36.792075  604010 cri.go:89] found id: ""
	I1213 11:57:36.792103  604010 logs.go:282] 0 containers: []
	W1213 11:57:36.792112  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:36.792119  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:36.792198  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:36.817506  604010 cri.go:89] found id: ""
	I1213 11:57:36.817540  604010 logs.go:282] 0 containers: []
	W1213 11:57:36.817549  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:36.817558  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:36.817624  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:36.842603  604010 cri.go:89] found id: ""
	I1213 11:57:36.842627  604010 logs.go:282] 0 containers: []
	W1213 11:57:36.842635  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:36.842641  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:36.842721  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:36.868253  604010 cri.go:89] found id: ""
	I1213 11:57:36.868276  604010 logs.go:282] 0 containers: []
	W1213 11:57:36.868286  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:36.868295  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:36.868307  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:36.925033  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:36.925067  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:36.941121  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:36.941202  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:37.010945  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:36.998940   12577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:37.000295   12577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:37.000838   12577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:37.002747   12577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:37.003200   12577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:36.998940   12577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:37.000295   12577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:37.000838   12577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:37.002747   12577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:37.003200   12577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:37.010971  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:37.010986  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:37.039679  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:37.039717  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:39.569521  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:39.580209  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:39.580283  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:39.607577  604010 cri.go:89] found id: ""
	I1213 11:57:39.607609  604010 logs.go:282] 0 containers: []
	W1213 11:57:39.607618  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:39.607625  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:39.607684  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:39.632984  604010 cri.go:89] found id: ""
	I1213 11:57:39.633007  604010 logs.go:282] 0 containers: []
	W1213 11:57:39.633016  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:39.633022  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:39.633079  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:39.660977  604010 cri.go:89] found id: ""
	I1213 11:57:39.661006  604010 logs.go:282] 0 containers: []
	W1213 11:57:39.661016  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:39.661022  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:39.661083  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:39.685387  604010 cri.go:89] found id: ""
	I1213 11:57:39.685414  604010 logs.go:282] 0 containers: []
	W1213 11:57:39.685423  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:39.685430  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:39.685488  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:39.711315  604010 cri.go:89] found id: ""
	I1213 11:57:39.711354  604010 logs.go:282] 0 containers: []
	W1213 11:57:39.711364  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:39.711370  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:39.711434  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:39.736665  604010 cri.go:89] found id: ""
	I1213 11:57:39.736691  604010 logs.go:282] 0 containers: []
	W1213 11:57:39.736700  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:39.736707  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:39.736765  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:39.761215  604010 cri.go:89] found id: ""
	I1213 11:57:39.761240  604010 logs.go:282] 0 containers: []
	W1213 11:57:39.761250  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:39.761257  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:39.761317  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:39.785612  604010 cri.go:89] found id: ""
	I1213 11:57:39.785635  604010 logs.go:282] 0 containers: []
	W1213 11:57:39.785667  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:39.785677  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:39.785688  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:39.818169  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:39.818198  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:39.876172  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:39.876207  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:39.893614  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:39.893697  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:39.961561  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:39.953062   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:39.953798   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:39.955462   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:39.955793   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:39.957262   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:39.953062   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:39.953798   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:39.955462   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:39.955793   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:39.957262   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:39.961582  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:39.961598  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:42.487536  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:42.498423  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:42.498495  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:42.526754  604010 cri.go:89] found id: ""
	I1213 11:57:42.526784  604010 logs.go:282] 0 containers: []
	W1213 11:57:42.526793  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:42.526800  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:42.526866  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:42.557909  604010 cri.go:89] found id: ""
	I1213 11:57:42.557938  604010 logs.go:282] 0 containers: []
	W1213 11:57:42.557948  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:42.557955  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:42.558012  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:42.583283  604010 cri.go:89] found id: ""
	I1213 11:57:42.583311  604010 logs.go:282] 0 containers: []
	W1213 11:57:42.583319  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:42.583325  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:42.583417  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:42.612201  604010 cri.go:89] found id: ""
	I1213 11:57:42.612228  604010 logs.go:282] 0 containers: []
	W1213 11:57:42.612238  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:42.612244  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:42.612304  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:42.636897  604010 cri.go:89] found id: ""
	I1213 11:57:42.636926  604010 logs.go:282] 0 containers: []
	W1213 11:57:42.636935  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:42.636942  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:42.637003  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:42.662077  604010 cri.go:89] found id: ""
	I1213 11:57:42.662101  604010 logs.go:282] 0 containers: []
	W1213 11:57:42.662109  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:42.662116  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:42.662181  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:42.689090  604010 cri.go:89] found id: ""
	I1213 11:57:42.689117  604010 logs.go:282] 0 containers: []
	W1213 11:57:42.689126  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:42.689132  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:42.689194  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:42.714186  604010 cri.go:89] found id: ""
	I1213 11:57:42.714220  604010 logs.go:282] 0 containers: []
	W1213 11:57:42.714229  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:42.714239  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:42.714253  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:42.730012  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:42.730043  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:42.793528  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:42.784227   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:42.785106   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:42.787066   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:42.787860   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:42.789513   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:42.784227   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:42.785106   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:42.787066   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:42.787860   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:42.789513   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:42.793550  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:42.793562  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:42.820504  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:42.820540  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:42.850739  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:42.850772  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:45.416253  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:45.428104  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:45.428174  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:45.486919  604010 cri.go:89] found id: ""
	I1213 11:57:45.486943  604010 logs.go:282] 0 containers: []
	W1213 11:57:45.486952  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:45.486959  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:45.487018  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:45.518438  604010 cri.go:89] found id: ""
	I1213 11:57:45.518466  604010 logs.go:282] 0 containers: []
	W1213 11:57:45.518475  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:45.518482  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:45.518539  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:45.543147  604010 cri.go:89] found id: ""
	I1213 11:57:45.543174  604010 logs.go:282] 0 containers: []
	W1213 11:57:45.543183  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:45.543189  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:45.543247  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:45.568184  604010 cri.go:89] found id: ""
	I1213 11:57:45.568210  604010 logs.go:282] 0 containers: []
	W1213 11:57:45.568219  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:45.568226  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:45.568283  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:45.597036  604010 cri.go:89] found id: ""
	I1213 11:57:45.597062  604010 logs.go:282] 0 containers: []
	W1213 11:57:45.597072  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:45.597078  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:45.597140  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:45.625538  604010 cri.go:89] found id: ""
	I1213 11:57:45.625563  604010 logs.go:282] 0 containers: []
	W1213 11:57:45.625572  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:45.625579  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:45.625664  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:45.650305  604010 cri.go:89] found id: ""
	I1213 11:57:45.650340  604010 logs.go:282] 0 containers: []
	W1213 11:57:45.650350  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:45.650356  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:45.650415  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:45.674642  604010 cri.go:89] found id: ""
	I1213 11:57:45.674668  604010 logs.go:282] 0 containers: []
	W1213 11:57:45.674677  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:45.674723  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:45.674736  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:45.737984  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:45.729194   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:45.729808   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:45.731387   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:45.731876   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:45.733423   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:45.729194   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:45.729808   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:45.731387   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:45.731876   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:45.733423   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:45.738014  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:45.738030  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:45.764253  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:45.764293  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:45.794872  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:45.794900  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:45.852148  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:45.852181  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:48.369680  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:48.381452  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:48.381527  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:48.406963  604010 cri.go:89] found id: ""
	I1213 11:57:48.406989  604010 logs.go:282] 0 containers: []
	W1213 11:57:48.406998  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:48.407004  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:48.407069  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:48.453016  604010 cri.go:89] found id: ""
	I1213 11:57:48.453043  604010 logs.go:282] 0 containers: []
	W1213 11:57:48.453052  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:48.453060  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:48.453120  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:48.512775  604010 cri.go:89] found id: ""
	I1213 11:57:48.512806  604010 logs.go:282] 0 containers: []
	W1213 11:57:48.512815  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:48.512821  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:48.512879  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:48.538032  604010 cri.go:89] found id: ""
	I1213 11:57:48.538055  604010 logs.go:282] 0 containers: []
	W1213 11:57:48.538064  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:48.538070  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:48.538129  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:48.562781  604010 cri.go:89] found id: ""
	I1213 11:57:48.562815  604010 logs.go:282] 0 containers: []
	W1213 11:57:48.562831  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:48.562841  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:48.562899  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:48.592224  604010 cri.go:89] found id: ""
	I1213 11:57:48.592249  604010 logs.go:282] 0 containers: []
	W1213 11:57:48.592258  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:48.592265  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:48.592324  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:48.616499  604010 cri.go:89] found id: ""
	I1213 11:57:48.616524  604010 logs.go:282] 0 containers: []
	W1213 11:57:48.616533  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:48.616540  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:48.616604  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:48.641140  604010 cri.go:89] found id: ""
	I1213 11:57:48.641164  604010 logs.go:282] 0 containers: []
	W1213 11:57:48.641173  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:48.641183  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:48.641193  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:48.667031  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:48.667069  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:48.696402  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:48.696431  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:48.752046  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:48.752080  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:48.768352  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:48.768382  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:48.835752  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:48.828038   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:48.828514   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:48.830127   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:48.830542   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:48.831979   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:48.828038   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:48.828514   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:48.830127   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:48.830542   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:48.831979   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
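The timestamps (11:57:30, :33, :36, :39, ...) show the same probe repeating on roughly a three-second cadence. A hypothetical sketch of such a fixed-interval retry loop follows; apiServerRunning here stands in for the sudo pgrep -xnf kube-apiserver.*minikube.* check that opens each cycle, and the log-gathering step is reduced to a print:

	// Hypothetical sketch of the ~3-second retry cadence implied by the
	// timestamps; not minikube's actual wait logic.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiServerRunning mirrors the probe that opens each cycle:
	//   sudo pgrep -xnf kube-apiserver.*minikube.*
	// pgrep exits 0 only when a matching process exists.
	func apiServerRunning() bool {
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	func main() {
		ticker := time.NewTicker(3 * time.Second)
		defer ticker.Stop()
		for range ticker.C {
			if apiServerRunning() {
				fmt.Println("kube-apiserver process found")
				return
			}
			fmt.Println("kube-apiserver still not running; would gather logs and retry")
		}
	}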
	I1213 11:57:51.337160  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:51.349596  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:51.349697  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:51.384310  604010 cri.go:89] found id: ""
	I1213 11:57:51.384341  604010 logs.go:282] 0 containers: []
	W1213 11:57:51.384350  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:51.384358  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:51.384415  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:51.409502  604010 cri.go:89] found id: ""
	I1213 11:57:51.409523  604010 logs.go:282] 0 containers: []
	W1213 11:57:51.409532  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:51.409539  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:51.409595  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:51.444866  604010 cri.go:89] found id: ""
	I1213 11:57:51.444887  604010 logs.go:282] 0 containers: []
	W1213 11:57:51.444896  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:51.444901  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:51.444957  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:51.498878  604010 cri.go:89] found id: ""
	I1213 11:57:51.498900  604010 logs.go:282] 0 containers: []
	W1213 11:57:51.498908  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:51.498915  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:51.498970  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:51.532054  604010 cri.go:89] found id: ""
	I1213 11:57:51.532082  604010 logs.go:282] 0 containers: []
	W1213 11:57:51.532091  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:51.532098  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:51.532159  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:51.561798  604010 cri.go:89] found id: ""
	I1213 11:57:51.561833  604010 logs.go:282] 0 containers: []
	W1213 11:57:51.561842  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:51.561849  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:51.561906  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:51.586723  604010 cri.go:89] found id: ""
	I1213 11:57:51.586798  604010 logs.go:282] 0 containers: []
	W1213 11:57:51.586820  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:51.586843  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:51.586951  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:51.612513  604010 cri.go:89] found id: ""
	I1213 11:57:51.612538  604010 logs.go:282] 0 containers: []
	W1213 11:57:51.612547  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:51.612557  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:51.612569  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:51.628622  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:51.628650  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:51.699783  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:51.691237   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:51.691797   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:51.693193   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:51.693944   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:51.695717   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:51.691237   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:51.691797   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:51.693193   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:51.693944   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:51.695717   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:51.699815  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:51.699832  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:51.725055  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:51.725092  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:51.758574  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:51.758604  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:54.315140  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:54.325600  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:54.325693  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:54.352056  604010 cri.go:89] found id: ""
	I1213 11:57:54.352081  604010 logs.go:282] 0 containers: []
	W1213 11:57:54.352089  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:54.352096  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:54.352157  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:54.375586  604010 cri.go:89] found id: ""
	I1213 11:57:54.375611  604010 logs.go:282] 0 containers: []
	W1213 11:57:54.375620  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:54.375626  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:54.375683  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:54.399138  604010 cri.go:89] found id: ""
	I1213 11:57:54.399163  604010 logs.go:282] 0 containers: []
	W1213 11:57:54.399172  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:54.399178  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:54.399234  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:54.439999  604010 cri.go:89] found id: ""
	I1213 11:57:54.440025  604010 logs.go:282] 0 containers: []
	W1213 11:57:54.440033  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:54.440039  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:54.440096  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:54.505093  604010 cri.go:89] found id: ""
	I1213 11:57:54.505124  604010 logs.go:282] 0 containers: []
	W1213 11:57:54.505133  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:54.505140  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:54.505198  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:54.529921  604010 cri.go:89] found id: ""
	I1213 11:57:54.529947  604010 logs.go:282] 0 containers: []
	W1213 11:57:54.529956  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:54.529966  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:54.530029  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:54.556363  604010 cri.go:89] found id: ""
	I1213 11:57:54.556390  604010 logs.go:282] 0 containers: []
	W1213 11:57:54.556399  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:54.556406  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:54.556483  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:54.581531  604010 cri.go:89] found id: ""
	I1213 11:57:54.581556  604010 logs.go:282] 0 containers: []
	W1213 11:57:54.581565  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:54.581574  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:54.581603  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:54.637009  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:54.637043  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:54.652919  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:54.652949  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:54.717113  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:54.708684   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:54.709580   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:54.711317   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:54.711640   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:54.713137   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:54.708684   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:54.709580   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:54.711317   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:54.711640   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:54.713137   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:54.717134  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:54.717148  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:54.743116  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:54.743151  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:57.272010  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:57.285875  604010 out.go:203] 
	W1213 11:57:57.288788  604010 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1213 11:57:57.288838  604010 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1213 11:57:57.288853  604010 out.go:285] * Related issues:
	W1213 11:57:57.288872  604010 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1213 11:57:57.288889  604010 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1213 11:57:57.291728  604010 out.go:203] 
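	Note: the K8S_APISERVER_MISSING exit above matches minikube's own suggestion to confirm that no apiserver process exists and that SELinux is not interfering. A minimal check from the test host, reusing the profile name from this log (getenforce is only present on SELinux-enabled hosts, hence the fallback):
	
		out/minikube-linux-arm64 -p newest-cni-796924 ssh -- "sudo pgrep -af kube-apiserver || echo 'no kube-apiserver process'"
		out/minikube-linux-arm64 -p newest-cni-796924 ssh -- "command -v getenforce >/dev/null && getenforce || echo 'SELinux tooling not installed'"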
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.355817742Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.355832504Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.355869739Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.355890810Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.355900722Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.355913464Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.355922515Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.355936029Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.355951643Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.355983734Z" level=info msg="Connect containerd service"
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.356248656Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.356827911Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.372443055Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.372505251Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.372539417Z" level=info msg="Start subscribing containerd event"
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.372587426Z" level=info msg="Start recovering state"
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.413846470Z" level=info msg="Start event monitor"
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.413904095Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.413916928Z" level=info msg="Start streaming server"
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.413926332Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.413934643Z" level=info msg="runtime interface starting up..."
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.413940961Z" level=info msg="starting plugins..."
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.413972059Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 11:51:54 newest-cni-796924 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.415701136Z" level=info msg="containerd successfully booted in 0.081179s"
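	Note: the containerd error above ("no network config found in /etc/cni/net.d") is normal early in a --network-plugin=cni start, since the recommended kindnet plugin presumably writes its CNI config only once its pod runs; here it never clears because kubelet never comes up (see the kubelet section below). A quick check that the directory is still empty, assuming the profile name from this log:
	
		out/minikube-linux-arm64 -p newest-cni-796924 ssh -- "ls -la /etc/cni/net.d"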
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:58:06.841381   13769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:58:06.841896   13769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:58:06.843499   13769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:58:06.843905   13769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:58:06.845127   13769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
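	Note: every "connection refused" on localhost:8443 above simply means nothing is listening on the apiserver port, consistent with the empty container list. A direct probe from inside the node, assuming the profile name from this log and that curl is available in the node image (/livez is a standard kube-apiserver health endpoint):
	
		out/minikube-linux-arm64 -p newest-cni-796924 ssh -- "curl -sk https://localhost:8443/livez || echo 'apiserver not listening'"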
	
	
	==> dmesg <==
	[ +12.907693] overlayfs: idmapped layers are currently not supported
	[Dec13 09:25] overlayfs: idmapped layers are currently not supported
	[ +26.192425] overlayfs: idmapped layers are currently not supported
	[Dec13 09:26] overlayfs: idmapped layers are currently not supported
	[ +25.729788] overlayfs: idmapped layers are currently not supported
	[Dec13 09:27] overlayfs: idmapped layers are currently not supported
	[Dec13 09:28] overlayfs: idmapped layers are currently not supported
	[Dec13 09:31] overlayfs: idmapped layers are currently not supported
	[Dec13 09:32] overlayfs: idmapped layers are currently not supported
	[ +17.979093] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec13 09:43] overlayfs: idmapped layers are currently not supported
	[Dec13 09:45] overlayfs: idmapped layers are currently not supported
	[ +25.885102] overlayfs: idmapped layers are currently not supported
	[Dec13 09:46] overlayfs: idmapped layers are currently not supported
	[ +22.078149] overlayfs: idmapped layers are currently not supported
	[Dec13 09:47] overlayfs: idmapped layers are currently not supported
	[Dec13 09:48] overlayfs: idmapped layers are currently not supported
	[Dec13 09:49] overlayfs: idmapped layers are currently not supported
	[Dec13 09:51] overlayfs: idmapped layers are currently not supported
	[ +17.043564] overlayfs: idmapped layers are currently not supported
	[Dec13 09:52] overlayfs: idmapped layers are currently not supported
	[Dec13 09:53] overlayfs: idmapped layers are currently not supported
	[Dec13 10:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:19] hrtimer: interrupt took 21247146 ns
	
	
	==> kernel <==
	 11:58:06 up  4:40,  0 user,  load average: 0.86, 0.90, 1.24
	Linux newest-cni-796924 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 11:58:03 newest-cni-796924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:58:03 newest-cni-796924 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:58:03 newest-cni-796924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:58:04 newest-cni-796924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:58:04 newest-cni-796924 kubelet[13617]: E1213 11:58:04.338818   13617 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:58:04 newest-cni-796924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:58:04 newest-cni-796924 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:58:05 newest-cni-796924 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	Dec 13 11:58:05 newest-cni-796924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:58:05 newest-cni-796924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:58:05 newest-cni-796924 kubelet[13660]: E1213 11:58:05.267352   13660 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:58:05 newest-cni-796924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:58:05 newest-cni-796924 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:58:05 newest-cni-796924 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
	Dec 13 11:58:05 newest-cni-796924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:58:05 newest-cni-796924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:58:06 newest-cni-796924 kubelet[13673]: E1213 11:58:06.030547   13673 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:58:06 newest-cni-796924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:58:06 newest-cni-796924 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:58:06 newest-cni-796924 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
	Dec 13 11:58:06 newest-cni-796924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:58:06 newest-cni-796924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:58:06 newest-cni-796924 kubelet[13750]: E1213 11:58:06.751450   13750 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:58:06 newest-cni-796924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:58:06 newest-cni-796924 systemd[1]: kubelet.service: Failed with result 'exit-code'.
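	Note: the kubelet crash loop above is the root cause of the whole failure chain: kubelet exits during config validation ("kubelet is configured to not run on a host using cgroup v1"), so no static pods start, no kube-apiserver ever appears, and minikube times out with K8S_APISERVER_MISSING. The docker info earlier in this log reports CgroupDriver:cgroupfs on an Ubuntu 20.04 host (kernel 5.15.0-1084-aws), which boots with the legacy cgroup v1 hierarchy by default. A minimal sketch to confirm the hierarchy on such a host (cgroup2fs indicates the unified v2 hierarchy; tmpfs indicates legacy v1):
	
		stat -fc %T /sys/fs/cgroup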
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-796924 -n newest-cni-796924
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-796924 -n newest-cni-796924: exit status 2 (333.516208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-796924" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-796924
helpers_test.go:244: (dbg) docker inspect newest-cni-796924:

-- stdout --
	[
	    {
	        "Id": "27aba94e8ede7488df815559f44c17888a5d8cf06635a1b1d81eb33d3d933273",
	        "Created": "2025-12-13T11:41:45.560617227Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 604142,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T11:51:48.770524373Z",
	            "FinishedAt": "2025-12-13T11:51:47.382046067Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/27aba94e8ede7488df815559f44c17888a5d8cf06635a1b1d81eb33d3d933273/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/27aba94e8ede7488df815559f44c17888a5d8cf06635a1b1d81eb33d3d933273/hostname",
	        "HostsPath": "/var/lib/docker/containers/27aba94e8ede7488df815559f44c17888a5d8cf06635a1b1d81eb33d3d933273/hosts",
	        "LogPath": "/var/lib/docker/containers/27aba94e8ede7488df815559f44c17888a5d8cf06635a1b1d81eb33d3d933273/27aba94e8ede7488df815559f44c17888a5d8cf06635a1b1d81eb33d3d933273-json.log",
	        "Name": "/newest-cni-796924",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-796924:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-796924",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "27aba94e8ede7488df815559f44c17888a5d8cf06635a1b1d81eb33d3d933273",
	                "LowerDir": "/var/lib/docker/overlay2/e3273dd39227f26c709bd7f69a32b4360acb8650246414f8098119d5f28e0f70-init/diff:/var/lib/docker/overlay2/5d0ca86320f2c06765995d644a73af30cd6ddb36f5c1a2a6b1ebb24af53cdabd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e3273dd39227f26c709bd7f69a32b4360acb8650246414f8098119d5f28e0f70/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e3273dd39227f26c709bd7f69a32b4360acb8650246414f8098119d5f28e0f70/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e3273dd39227f26c709bd7f69a32b4360acb8650246414f8098119d5f28e0f70/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-796924",
	                "Source": "/var/lib/docker/volumes/newest-cni-796924/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-796924",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-796924",
	                "name.minikube.sigs.k8s.io": "newest-cni-796924",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b9bb40aac9de7cd1274edecaff0f8eaf098acb0d5c0799c0a940ae7311a572ff",
	            "SandboxKey": "/var/run/docker/netns/b9bb40aac9de",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-796924": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:8b:15:a0:38:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "524b54a7afb58fdfadc2532a94da198ca12aafc23248ec4905999b39dfe064e0",
	                    "EndpointID": "b589d458f24f437f5bf8379bb70662db004fdd873d4df2f7211ededbab3c7988",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-796924",
	                        "27aba94e8ede"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
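Note: the inspect output above shows the apiserver port 8443 published on 127.0.0.1:33443, so the host-side mapping exists even though the apiserver itself is down. The mapped port can be extracted with the same Go-template style the harness uses for 22/tcp elsewhere in this log; a sketch, assuming the container name from this run:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-796924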
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-796924 -n newest-cni-796924
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-796924 -n newest-cni-796924: exit status 2 (335.776523ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-796924 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-796924 logs -n 25: (1.623945417s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p embed-certs-951675                                                                                                                                                                                                                                      │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ delete  │ -p embed-certs-951675                                                                                                                                                                                                                                      │ embed-certs-951675           │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ delete  │ -p disable-driver-mounts-823668                                                                                                                                                                                                                            │ disable-driver-mounts-823668 │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:39 UTC │
	│ start   │ -p default-k8s-diff-port-191845 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:39 UTC │ 13 Dec 25 11:40 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-191845 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:40 UTC │
	│ stop    │ -p default-k8s-diff-port-191845 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:40 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-191845 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:40 UTC │
	│ start   │ -p default-k8s-diff-port-191845 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:40 UTC │ 13 Dec 25 11:41 UTC │
	│ image   │ default-k8s-diff-port-191845 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ pause   │ -p default-k8s-diff-port-191845 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ unpause │ -p default-k8s-diff-port-191845 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ delete  │ -p default-k8s-diff-port-191845                                                                                                                                                                                                                            │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ delete  │ -p default-k8s-diff-port-191845                                                                                                                                                                                                                            │ default-k8s-diff-port-191845 │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │ 13 Dec 25 11:41 UTC │
	│ start   │ -p newest-cni-796924 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-796924            │ jenkins │ v1.37.0 │ 13 Dec 25 11:41 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-333352 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-333352            │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ stop    │ -p no-preload-333352 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-333352            │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │ 13 Dec 25 11:46 UTC │
	│ addons  │ enable dashboard -p no-preload-333352 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-333352            │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │ 13 Dec 25 11:46 UTC │
	│ start   │ -p no-preload-333352 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-333352            │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-796924 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-796924            │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │                     │
	│ stop    │ -p newest-cni-796924 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-796924            │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable dashboard -p newest-cni-796924 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-796924            │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ start   │ -p newest-cni-796924 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-796924            │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │                     │
	│ image   │ newest-cni-796924 image list --format=json                                                                                                                                                                                                                 │ newest-cni-796924            │ jenkins │ v1.37.0 │ 13 Dec 25 11:58 UTC │ 13 Dec 25 11:58 UTC │
	│ pause   │ -p newest-cni-796924 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-796924            │ jenkins │ v1.37.0 │ 13 Dec 25 11:58 UTC │ 13 Dec 25 11:58 UTC │
	│ unpause │ -p newest-cni-796924 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-796924            │ jenkins │ v1.37.0 │ 13 Dec 25 11:58 UTC │ 13 Dec 25 11:58 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 11:51:48
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 11:51:48.463604  604010 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:51:48.463796  604010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:51:48.463823  604010 out.go:374] Setting ErrFile to fd 2...
	I1213 11:51:48.463842  604010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:51:48.464235  604010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 11:51:48.465119  604010 out.go:368] Setting JSON to false
	I1213 11:51:48.466102  604010 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":16461,"bootTime":1765610247,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 11:51:48.466204  604010 start.go:143] virtualization:  
	I1213 11:51:48.469444  604010 out.go:179] * [newest-cni-796924] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:51:48.473497  604010 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:51:48.473608  604010 notify.go:221] Checking for updates...
	I1213 11:51:48.479464  604010 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:51:48.482541  604010 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 11:51:48.485448  604010 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 11:51:48.488462  604010 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:51:48.491424  604010 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:51:48.494980  604010 config.go:182] Loaded profile config "newest-cni-796924": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:51:48.495553  604010 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:51:48.518013  604010 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:51:48.518194  604010 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:51:48.596406  604010 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:51:48.586781308 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:51:48.596541  604010 docker.go:319] overlay module found
	I1213 11:51:48.599865  604010 out.go:179] * Using the docker driver based on existing profile
	I1213 11:51:48.602647  604010 start.go:309] selected driver: docker
	I1213 11:51:48.602672  604010 start.go:927] validating driver "docker" against &{Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:51:48.602834  604010 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:51:48.603569  604010 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:51:48.671569  604010 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:51:48.654666754 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:51:48.671930  604010 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 11:51:48.671965  604010 cni.go:84] Creating CNI manager for ""
	I1213 11:51:48.672022  604010 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 11:51:48.672078  604010 start.go:353] cluster config:
	{Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:51:48.675265  604010 out.go:179] * Starting "newest-cni-796924" primary control-plane node in "newest-cni-796924" cluster
	I1213 11:51:48.678207  604010 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 11:51:48.681114  604010 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:51:48.683920  604010 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 11:51:48.683976  604010 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 11:51:48.683989  604010 cache.go:65] Caching tarball of preloaded images
	I1213 11:51:48.684102  604010 preload.go:238] Found /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 11:51:48.684116  604010 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 11:51:48.684232  604010 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/config.json ...
	I1213 11:51:48.684464  604010 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:51:48.711458  604010 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:51:48.711481  604010 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:51:48.711496  604010 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:51:48.711527  604010 start.go:360] acquireMachinesLock for newest-cni-796924: {Name:mkb23dc851632c47983afd0f3cb215d071a4c6d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:51:48.711588  604010 start.go:364] duration metric: took 38.818µs to acquireMachinesLock for "newest-cni-796924"
	I1213 11:51:48.711608  604010 start.go:96] Skipping create...Using existing machine configuration
	I1213 11:51:48.711613  604010 fix.go:54] fixHost starting: 
	I1213 11:51:48.711888  604010 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:51:48.735758  604010 fix.go:112] recreateIfNeeded on newest-cni-796924: state=Stopped err=<nil>
	W1213 11:51:48.735799  604010 fix.go:138] unexpected machine state, will restart: <nil>
	W1213 11:51:48.171125  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:50.670988  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:51:48.739083  604010 out.go:252] * Restarting existing docker container for "newest-cni-796924" ...
	I1213 11:51:48.739191  604010 cli_runner.go:164] Run: docker start newest-cni-796924
	I1213 11:51:48.989234  604010 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:51:49.013708  604010 kic.go:430] container "newest-cni-796924" state is running.
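
The restart path above is driven by two Docker CLI calls that appear verbatim in the log: a state probe via `docker container inspect --format={{.State.Status}}` and a `docker start` once the container is found stopped. A minimal Go sketch of that probe-then-restart step (the helper name and error handling are illustrative, not minikube's actual fix.go code):

```go
// Sketch: probe a container's state and restart it if it is not running,
// mirroring the "docker container inspect" / "docker start" calls above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	const name = "newest-cni-796924" // profile name taken from the log
	state, err := containerState(name)
	if err != nil {
		panic(err)
	}
	if state != "running" {
		// Reuse the existing container rather than recreating it, as the
		// "unexpected machine state, will restart" branch does above.
		if err := exec.Command("docker", "start", name).Run(); err != nil {
			panic(err)
		}
	}
	fmt.Println("previous state:", state)
}
```
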
	I1213 11:51:49.014143  604010 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-796924
	I1213 11:51:49.035818  604010 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/config.json ...
	I1213 11:51:49.036044  604010 machine.go:94] provisionDockerMachine start ...
	I1213 11:51:49.036107  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:49.066663  604010 main.go:143] libmachine: Using SSH client type: native
	I1213 11:51:49.067143  604010 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1213 11:51:49.067157  604010 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:51:49.067832  604010 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47590->127.0.0.1:33440: read: connection reset by peer
	I1213 11:51:52.226322  604010 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-796924
	
	I1213 11:51:52.226353  604010 ubuntu.go:182] provisioning hostname "newest-cni-796924"
	I1213 11:51:52.226417  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:52.244890  604010 main.go:143] libmachine: Using SSH client type: native
	I1213 11:51:52.245240  604010 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1213 11:51:52.245259  604010 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-796924 && echo "newest-cni-796924" | sudo tee /etc/hostname
	I1213 11:51:52.409909  604010 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-796924
	
	I1213 11:51:52.410005  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:52.440908  604010 main.go:143] libmachine: Using SSH client type: native
	I1213 11:51:52.441219  604010 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1213 11:51:52.441235  604010 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-796924' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-796924/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-796924' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:51:52.595320  604010 main.go:143] libmachine: SSH cmd err, output: <nil>: 
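
Every provisioning command above (the hostname run, the /etc/hosts edit) executes over SSH to the container's forwarded port, 127.0.0.1:33440, authenticated with the machine's id_rsa key; both values appear verbatim in the log. A rough equivalent with golang.org/x/crypto/ssh (InsecureIgnoreHostKey is for illustration only; real code should pin the host key):

```go
// Sketch: run a command over the forwarded SSH port, as the "native"
// SSH client above does. Port and key path are copied from the log.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33440", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.Output("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out) // expect "newest-cni-796924"
}
```
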
	I1213 11:51:52.595345  604010 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-307042/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-307042/.minikube}
	I1213 11:51:52.595378  604010 ubuntu.go:190] setting up certificates
	I1213 11:51:52.595395  604010 provision.go:84] configureAuth start
	I1213 11:51:52.595456  604010 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-796924
	I1213 11:51:52.612730  604010 provision.go:143] copyHostCerts
	I1213 11:51:52.612805  604010 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem, removing ...
	I1213 11:51:52.612815  604010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem
	I1213 11:51:52.612893  604010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem (1082 bytes)
	I1213 11:51:52.612991  604010 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem, removing ...
	I1213 11:51:52.612997  604010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem
	I1213 11:51:52.613022  604010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem (1123 bytes)
	I1213 11:51:52.613072  604010 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem, removing ...
	I1213 11:51:52.613077  604010 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem
	I1213 11:51:52.613099  604010 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem (1675 bytes)
	I1213 11:51:52.613145  604010 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem org=jenkins.newest-cni-796924 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-796924]
	I1213 11:51:52.732846  604010 provision.go:177] copyRemoteCerts
	I1213 11:51:52.732930  604010 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:51:52.732973  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:52.750653  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:52.855439  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:51:52.874016  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 11:51:52.892129  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 11:51:52.911103  604010 provision.go:87] duration metric: took 315.684656ms to configureAuth
	I1213 11:51:52.911132  604010 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:51:52.911332  604010 config.go:182] Loaded profile config "newest-cni-796924": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:51:52.911340  604010 machine.go:97] duration metric: took 3.875289031s to provisionDockerMachine
	I1213 11:51:52.911347  604010 start.go:293] postStartSetup for "newest-cni-796924" (driver="docker")
	I1213 11:51:52.911359  604010 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:51:52.911407  604010 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:51:52.911460  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:52.929094  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:53.034971  604010 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:51:53.038558  604010 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:51:53.038590  604010 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:51:53.038602  604010 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/addons for local assets ...
	I1213 11:51:53.038659  604010 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/files for local assets ...
	I1213 11:51:53.038763  604010 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> 3089152.pem in /etc/ssl/certs
	I1213 11:51:53.038874  604010 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:51:53.046532  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 11:51:53.064751  604010 start.go:296] duration metric: took 153.388066ms for postStartSetup
	I1213 11:51:53.064850  604010 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:51:53.064897  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:53.083055  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:53.186537  604010 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:51:53.194814  604010 fix.go:56] duration metric: took 4.483190974s for fixHost
	I1213 11:51:53.194902  604010 start.go:83] releasing machines lock for "newest-cni-796924", held for 4.483304896s
	I1213 11:51:53.195014  604010 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-796924
	I1213 11:51:53.218858  604010 ssh_runner.go:195] Run: cat /version.json
	I1213 11:51:53.218911  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:53.219425  604010 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:51:53.219496  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:53.245887  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:53.248082  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:53.440734  604010 ssh_runner.go:195] Run: systemctl --version
	I1213 11:51:53.447618  604010 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:51:53.452306  604010 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:51:53.452441  604010 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:51:53.460789  604010 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 11:51:53.460813  604010 start.go:496] detecting cgroup driver to use...
	I1213 11:51:53.460876  604010 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 11:51:53.460961  604010 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 11:51:53.478830  604010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 11:51:53.493048  604010 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:51:53.493110  604010 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:51:53.509243  604010 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:51:53.522928  604010 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:51:53.639237  604010 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:51:53.752852  604010 docker.go:234] disabling docker service ...
	I1213 11:51:53.752960  604010 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:51:53.768708  604010 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:51:53.782124  604010 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:51:53.903168  604010 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:51:54.054509  604010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:51:54.067985  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:51:54.083550  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 11:51:54.093447  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 11:51:54.102944  604010 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 11:51:54.103048  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 11:51:54.112424  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:51:54.121802  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 11:51:54.130945  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 11:51:54.140080  604010 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:51:54.148567  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 11:51:54.157935  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 11:51:54.167456  604010 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 11:51:54.176969  604010 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:51:54.184730  604010 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:51:54.192410  604010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:51:54.297614  604010 ssh_runner.go:195] Run: sudo systemctl restart containerd
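
The run of sed edits above rewrites /etc/containerd/config.toml in place; the key change is forcing `SystemdCgroup = false` so containerd agrees with the "cgroupfs" driver detected on the host, after which the daemon is reloaded and containerd restarted. The same indentation-preserving substitution expressed as a Go regexp, over a made-up TOML fragment:

```go
// Sketch: the SystemdCgroup rewrite performed above with sed, expressed
// as a Go regexp transform. The TOML fragment is illustrative only.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := []byte("[plugins.cri.containerd.runtimes.runc.options]\n" +
		"  SystemdCgroup = true\n")
	// Matches sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(conf, []byte("${1}SystemdCgroup = false"))
	fmt.Print(string(out))
}
```
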
	I1213 11:51:54.415943  604010 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 11:51:54.416062  604010 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 11:51:54.419918  604010 start.go:564] Will wait 60s for crictl version
	I1213 11:51:54.420004  604010 ssh_runner.go:195] Run: which crictl
	I1213 11:51:54.424003  604010 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:51:54.449039  604010 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 11:51:54.449144  604010 ssh_runner.go:195] Run: containerd --version
	I1213 11:51:54.473383  604010 ssh_runner.go:195] Run: containerd --version
	I1213 11:51:54.499419  604010 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 11:51:54.502369  604010 cli_runner.go:164] Run: docker network inspect newest-cni-796924 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:51:54.518648  604010 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 11:51:54.522791  604010 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:51:54.535931  604010 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 11:51:54.538956  604010 kubeadm.go:884] updating cluster {Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:51:54.539121  604010 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 11:51:54.539232  604010 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:51:54.563801  604010 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 11:51:54.563827  604010 containerd.go:534] Images already preloaded, skipping extraction
	I1213 11:51:54.563893  604010 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:51:54.592245  604010 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 11:51:54.592267  604010 cache_images.go:86] Images are preloaded, skipping loading
	I1213 11:51:54.592274  604010 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 11:51:54.592392  604010 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-796924 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:51:54.592461  604010 ssh_runner.go:195] Run: sudo crictl info
	I1213 11:51:54.621799  604010 cni.go:84] Creating CNI manager for ""
	I1213 11:51:54.621822  604010 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 11:51:54.621841  604010 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 11:51:54.621863  604010 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-796924 NodeName:newest-cni-796924 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:51:54.621977  604010 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-796924"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 11:51:54.622049  604010 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 11:51:54.629798  604010 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 11:51:54.629892  604010 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:51:54.637447  604010 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 11:51:54.650384  604010 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 11:51:54.666817  604010 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
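
The kubeadm.yaml.new just written carries four YAML documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A quick sanity check is to round-trip the kubelet document through a YAML decoder; a sketch using gopkg.in/yaml.v3, with the struct trimmed to fields actually present in the config above:

```go
// Sketch: decode the KubeletConfiguration document from the generated
// kubeadm config and read back a few fields. Values copied from the log.
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

const kubeletDoc = `
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
`

type kubeletConfig struct {
	APIVersion               string `yaml:"apiVersion"`
	Kind                     string `yaml:"kind"`
	CgroupDriver             string `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	FailSwapOn               bool   `yaml:"failSwapOn"`
	StaticPodPath            string `yaml:"staticPodPath"`
}

func main() {
	var cfg kubeletConfig
	if err := yaml.Unmarshal([]byte(kubeletDoc), &cfg); err != nil {
		panic(err)
	}
	// The cgroup driver must match containerd's SystemdCgroup = false
	// setting applied earlier in this log.
	fmt.Println(cfg.Kind, cfg.CgroupDriver, cfg.StaticPodPath)
}
```
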
	I1213 11:51:54.689998  604010 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:51:54.695776  604010 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:51:54.710482  604010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:51:54.832824  604010 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:51:54.850492  604010 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924 for IP: 192.168.76.2
	I1213 11:51:54.850566  604010 certs.go:195] generating shared ca certs ...
	I1213 11:51:54.850597  604010 certs.go:227] acquiring lock for ca certs: {Name:mkca84a85b1b664dba08ef567e84d493256f825e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:51:54.850790  604010 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key
	I1213 11:51:54.850872  604010 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key
	I1213 11:51:54.850895  604010 certs.go:257] generating profile certs ...
	I1213 11:51:54.851026  604010 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/client.key
	I1213 11:51:54.851129  604010 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key.ced45374
	I1213 11:51:54.851211  604010 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.key
	I1213 11:51:54.851379  604010 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem (1338 bytes)
	W1213 11:51:54.851441  604010 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915_empty.pem, impossibly tiny 0 bytes
	I1213 11:51:54.851467  604010 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:51:54.851513  604010 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:51:54.851568  604010 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:51:54.851620  604010 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem (1675 bytes)
	I1213 11:51:54.851698  604010 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 11:51:54.852295  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:51:54.879994  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:51:54.900131  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:51:54.919515  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:51:54.939840  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 11:51:54.959348  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 11:51:54.977529  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:51:54.995648  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/newest-cni-796924/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 11:51:55.023031  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /usr/share/ca-certificates/3089152.pem (1708 bytes)
	I1213 11:51:55.043814  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:51:55.063273  604010 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem --> /usr/share/ca-certificates/308915.pem (1338 bytes)
	I1213 11:51:55.083198  604010 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:51:55.097732  604010 ssh_runner.go:195] Run: openssl version
	I1213 11:51:55.104458  604010 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3089152.pem
	I1213 11:51:55.112443  604010 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3089152.pem /etc/ssl/certs/3089152.pem
	I1213 11:51:55.120212  604010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3089152.pem
	I1213 11:51:55.124175  604010 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:22 /usr/share/ca-certificates/3089152.pem
	I1213 11:51:55.124296  604010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3089152.pem
	I1213 11:51:55.166612  604010 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:51:55.174931  604010 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:51:55.182763  604010 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:51:55.190655  604010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:51:55.194550  604010 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:51:55.194637  604010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:51:55.235820  604010 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:51:55.243647  604010 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/308915.pem
	I1213 11:51:55.251252  604010 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/308915.pem /etc/ssl/certs/308915.pem
	I1213 11:51:55.258979  604010 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/308915.pem
	I1213 11:51:55.263040  604010 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:22 /usr/share/ca-certificates/308915.pem
	I1213 11:51:55.263115  604010 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308915.pem
	I1213 11:51:55.305815  604010 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 11:51:55.313358  604010 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:51:55.317228  604010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 11:51:55.358360  604010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 11:51:55.399354  604010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 11:51:55.440616  604010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 11:51:55.481788  604010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 11:51:55.527783  604010 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
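
Each `openssl x509 -checkend 86400` call above asks whether a control-plane certificate stays valid for at least the next 24 hours (exit status 0 if so). The equivalent check in Go's crypto/x509, with the path copied from the log (reading it normally requires root):

```go
// Sketch: the -checkend 86400 expiry test done with Go's crypto/x509.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent to "openssl x509 -checkend 86400": does the certificate
	// outlive the next 24 hours?
	ok := cert.NotAfter.After(time.Now().Add(24 * time.Hour))
	fmt.Println("valid for the next 24h:", ok)
}
```
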
	I1213 11:51:55.570548  604010 kubeadm.go:401] StartCluster: {Name:newest-cni-796924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-796924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:51:55.570648  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 11:51:55.570740  604010 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:51:55.597807  604010 cri.go:89] found id: ""
	I1213 11:51:55.597910  604010 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:51:55.605830  604010 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 11:51:55.605851  604010 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 11:51:55.605907  604010 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 11:51:55.613526  604010 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 11:51:55.614085  604010 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-796924" does not appear in /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 11:51:55.614332  604010 kubeconfig.go:62] /home/jenkins/minikube-integration/22127-307042/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-796924" cluster setting kubeconfig missing "newest-cni-796924" context setting]
	I1213 11:51:55.614935  604010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/kubeconfig: {Name:mk9039d1d506122ff52224469c478e739c5ebabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:51:55.617326  604010 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 11:51:55.625376  604010 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1213 11:51:55.625455  604010 kubeadm.go:602] duration metric: took 19.59756ms to restartPrimaryControlPlane
	I1213 11:51:55.625473  604010 kubeadm.go:403] duration metric: took 54.935084ms to StartCluster
	I1213 11:51:55.625491  604010 settings.go:142] acquiring lock: {Name:mk079e9a25ebbc2c8fbae42d4c6ed096a652c00b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:51:55.625565  604010 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 11:51:55.626520  604010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/kubeconfig: {Name:mk9039d1d506122ff52224469c478e739c5ebabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:51:55.626793  604010 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 11:51:55.627185  604010 config.go:182] Loaded profile config "newest-cni-796924": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:51:55.627271  604010 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 11:51:55.627363  604010 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-796924"
	I1213 11:51:55.627383  604010 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-796924"
	I1213 11:51:55.627413  604010 host.go:66] Checking if "newest-cni-796924" exists ...
	I1213 11:51:55.627434  604010 addons.go:70] Setting dashboard=true in profile "newest-cni-796924"
	I1213 11:51:55.627450  604010 addons.go:239] Setting addon dashboard=true in "newest-cni-796924"
	W1213 11:51:55.627456  604010 addons.go:248] addon dashboard should already be in state true
	I1213 11:51:55.627477  604010 host.go:66] Checking if "newest-cni-796924" exists ...
	I1213 11:51:55.627878  604010 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:51:55.628091  604010 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:51:55.628783  604010 addons.go:70] Setting default-storageclass=true in profile "newest-cni-796924"
	I1213 11:51:55.628812  604010 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-796924"
	I1213 11:51:55.629112  604010 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:51:55.631079  604010 out.go:179] * Verifying Kubernetes components...
	I1213 11:51:55.634139  604010 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:51:55.667375  604010 addons.go:239] Setting addon default-storageclass=true in "newest-cni-796924"
	I1213 11:51:55.667423  604010 host.go:66] Checking if "newest-cni-796924" exists ...
	I1213 11:51:55.667842  604010 cli_runner.go:164] Run: docker container inspect newest-cni-796924 --format={{.State.Status}}
	I1213 11:51:55.688084  604010 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:51:55.691677  604010 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:51:55.691701  604010 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 11:51:55.691785  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:55.697906  604010 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 11:51:55.697933  604010 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 11:51:55.698005  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:55.704903  604010 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 11:51:55.707765  604010 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1213 11:51:53.170873  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:55.171466  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:51:57.171707  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:51:55.710658  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 11:51:55.710701  604010 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 11:51:55.710771  604010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-796924
	I1213 11:51:55.754330  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:55.772597  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:55.773144  604010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/newest-cni-796924/id_rsa Username:docker}
	I1213 11:51:55.866635  604010 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:51:55.926205  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 11:51:55.934055  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:51:55.957399  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 11:51:55.957444  604010 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 11:51:55.971225  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 11:51:55.971291  604010 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 11:51:56.007402  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 11:51:56.007444  604010 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 11:51:56.023097  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 11:51:56.023122  604010 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 11:51:56.039306  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 11:51:56.039347  604010 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 11:51:56.054865  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 11:51:56.054892  604010 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 11:51:56.069056  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 11:51:56.069097  604010 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 11:51:56.083856  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 11:51:56.083885  604010 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 11:51:56.097577  604010 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 11:51:56.097600  604010 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 11:51:56.111351  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 11:51:56.663977  604010 api_server.go:52] waiting for apiserver process to appear ...
	W1213 11:51:56.664058  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.664121  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:51:56.664172  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.664188  604010 retry.go:31] will retry after 289.236479ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.664122  604010 retry.go:31] will retry after 183.877549ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:51:56.664453  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.664469  604010 retry.go:31] will retry after 218.899341ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.849187  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 11:51:56.883801  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:51:56.926668  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.926802  604010 retry.go:31] will retry after 241.089101ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.953849  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:51:56.985603  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:56.985688  604010 retry.go:31] will retry after 237.809149ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:51:57.026263  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:57.026297  604010 retry.go:31] will retry after 349.427803ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:57.164593  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:51:57.169067  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 11:51:57.224678  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:51:57.234523  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:57.234624  604010 retry.go:31] will retry after 787.051236ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:51:57.297371  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:57.297440  604010 retry.go:31] will retry after 317.469921ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:57.376456  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:51:57.452615  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:57.452649  604010 retry.go:31] will retry after 679.978714ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:57.616149  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 11:51:57.664727  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:51:57.701776  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:57.701820  604010 retry.go:31] will retry after 682.458958ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.022897  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:51:58.088105  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.088141  604010 retry.go:31] will retry after 475.463602ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.133516  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:51:58.165032  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:51:58.230626  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.230659  604010 retry.go:31] will retry after 634.421741ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.385149  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:51:58.461368  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.461471  604010 retry.go:31] will retry after 859.118132ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:51:59.671078  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:02.171305  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:51:58.564227  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:51:58.633858  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.633891  604010 retry.go:31] will retry after 1.632863719s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.665061  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:51:58.866071  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:51:58.936827  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:58.936859  604010 retry.go:31] will retry after 1.533813591s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:59.165263  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:51:59.321822  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:51:59.385607  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:59.385640  604010 retry.go:31] will retry after 2.101781304s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:51:59.665231  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:00.164312  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:00.267962  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 11:52:00.471799  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:52:00.516223  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:00.516306  604010 retry.go:31] will retry after 1.542990826s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:52:00.569718  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:00.569762  604010 retry.go:31] will retry after 1.699392085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:00.664868  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:01.165071  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:01.487701  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:52:01.556576  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:01.556610  604010 retry.go:31] will retry after 1.79578881s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:01.665032  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:02.059588  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:52:02.123368  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:02.123421  604010 retry.go:31] will retry after 4.212258745s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:02.164643  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:02.270065  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:52:02.336655  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:02.336687  604010 retry.go:31] will retry after 2.291652574s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:02.665180  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:03.164491  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:03.353076  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:52:03.415819  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:03.415855  604010 retry.go:31] will retry after 3.520621119s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:52:04.171660  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:06.671628  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:03.664666  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:04.164990  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:04.629361  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:52:04.665164  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:04.695856  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:04.695887  604010 retry.go:31] will retry after 5.092647079s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:05.164583  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:05.665005  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:06.164298  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:06.336728  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:52:06.399256  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:06.399289  604010 retry.go:31] will retry after 2.548236052s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:06.664733  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:06.937128  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:52:07.007320  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:07.007359  604010 retry.go:31] will retry after 3.279734506s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:07.164482  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:07.664186  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:08.164259  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:09.170863  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:11.170983  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:08.664905  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:08.947682  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:52:09.039225  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:09.039255  604010 retry.go:31] will retry after 6.163469341s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:09.164651  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:09.664239  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:09.789499  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:52:09.850576  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:09.850610  604010 retry.go:31] will retry after 3.796434626s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:10.165090  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:10.288047  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:52:10.355227  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:10.355265  604010 retry.go:31] will retry after 7.010948619s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:10.664471  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:11.165062  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:11.664272  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:12.164932  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:12.664657  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:13.164305  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:13.670824  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:15.671074  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:13.647328  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:52:13.664818  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:13.719910  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:13.719942  604010 retry.go:31] will retry after 9.330768854s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:14.164344  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:14.664306  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:15.164242  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:15.203030  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:52:15.263577  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:15.263607  604010 retry.go:31] will retry after 8.190073233s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:15.664266  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:16.165207  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:16.664293  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:17.164467  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:17.367027  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:52:17.430899  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:17.430934  604010 retry.go:31] will retry after 13.887712507s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:17.664357  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:18.164881  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:18.170945  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:20.670832  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:18.664960  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:19.164308  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:19.665208  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:20.165105  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:20.664287  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:21.164362  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:21.664274  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:22.164288  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:22.665206  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
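	The ~0.5s-spaced pgrep lines are a liveness probe: the process keeps polling for a kube-apiserver process until one appears. A minimal sketch of that loop; only the pgrep command itself comes from the log, the function name and timeout are illustrative:

	// Poll for the apiserver process, mirroring the repeated
	// `sudo pgrep -xnf kube-apiserver.*minikube.*` lines.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServerProcess returns nil once pgrep finds a match;
	// pgrep exits non-zero when no process matches the pattern.
	func waitForAPIServerProcess(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond) // matches the ~0.5s spacing in the log
		}
		return fmt.Errorf("kube-apiserver process never appeared within %s", timeout)
	}

	func main() {
		fmt.Println(waitForAPIServerProcess(30 * time.Second))
	}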
	I1213 11:52:23.051577  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:52:23.111902  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:23.111935  604010 retry.go:31] will retry after 11.527342508s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:23.165176  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:23.453917  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:52:23.170872  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:25.171346  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:27.171433  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:23.521291  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:23.521324  604010 retry.go:31] will retry after 14.842315117s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
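	Each failed apply is handed back to a backoff retrier (the retry.go:31 lines), and the varying intervals (13.88s, 11.52s, 14.84s, ...) suggest jittered delays so the parallel appliers (dashboard, storage-provisioner, storageclass) do not retry in lockstep. A sketch of that pattern, with illustrative constants rather than minikube's actual ones:

	// Re-run a command with jittered backoff until it succeeds or a
	// deadline passes; not minikube's actual retry.go, just its shape.
	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	func applyWithRetry(args []string, deadline time.Time) error {
		base := 10 * time.Second
		for {
			out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("%v: %v\nstderr: %s", args, err, out)
			}
			// Jitter the delay; this is why the logged intervals vary.
			delay := base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %s\n", delay)
			time.Sleep(delay)
		}
	}

	func main() {
		err := applyWithRetry(
			[]string{"kubectl", "apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml"},
			time.Now().Add(2*time.Minute),
		)
		fmt.Println(err)
	}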
	I1213 11:52:23.664722  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:24.165113  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:24.664242  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:25.164277  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:25.664353  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:26.164245  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:26.664280  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:27.164344  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:27.664260  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:28.164294  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:29.670795  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:31.671822  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:28.664213  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:29.165160  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:29.664269  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:30.165128  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:30.664169  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:31.164314  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:31.319227  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:52:31.384220  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:31.384257  604010 retry.go:31] will retry after 14.168397615s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:31.664303  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:32.164990  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:32.664299  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:33.164301  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:34.171181  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:36.670803  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:33.664641  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:34.164270  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:34.639887  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:52:34.664451  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:34.713642  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:34.713678  604010 retry.go:31] will retry after 21.545330114s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:35.164160  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:35.665036  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:36.164253  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:36.664233  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:37.164426  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:37.664423  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:38.164585  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:38.364338  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:52:38.426452  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:38.426486  604010 retry.go:31] will retry after 16.958085374s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:52:38.670951  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:41.170820  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:38.665187  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:39.164590  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:39.665128  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:40.164295  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:40.664289  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:41.164238  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:41.664308  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:42.164562  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:42.664974  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:43.164327  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:43.170883  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:45.172031  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:47.670782  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:43.664236  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:44.164970  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:44.664271  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:45.164423  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:45.553023  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:52:45.614931  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:45.614965  604010 retry.go:31] will retry after 19.954026213s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:45.665141  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:46.164288  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:46.664717  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:47.164232  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:47.664844  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:48.164283  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:50.171769  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 11:52:52.671828  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:48.665063  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:49.164283  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:49.664430  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:50.165168  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:50.665085  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:51.164301  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:51.664309  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:52.165148  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:52.664704  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:53.164339  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 11:52:55.170984  596998 node_ready.go:55] error getting node "no-preload-333352" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-333352": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 11:52:56.171498  596998 node_ready.go:38] duration metric: took 6m0.001140759s for node "no-preload-333352" to be "Ready" ...
	I1213 11:52:56.174587  596998 out.go:203] 
	W1213 11:52:56.177556  596998 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 11:52:56.177585  596998 out.go:285] * 
	W1213 11:52:56.179740  596998 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 11:52:56.182759  596998 out.go:203] 
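	Process 596998's wait loop (node_ready.go) polls the node object every couple of seconds, logs each connection-refused, and aborts once the 6m0s deadline expires, which is the GUEST_START failure above. A structural sketch, with a placeholder standing in for the real GET against https://192.168.85.2:8443/api/v1/nodes/no-preload-333352:

	// Shape of the node-Ready wait loop; getNodeReady is a stub so the
	// deadline/retry structure is visible without a real cluster.
	package main

	import (
		"context"
		"fmt"
		"time"
	)

	func getNodeReady(ctx context.Context, node string) (bool, error) {
		// Placeholder: the real call fetches the node and inspects its
		// "Ready" condition; here it always fails like the log above.
		return false, fmt.Errorf("dial tcp 192.168.85.2:8443: connect: connection refused")
	}

	func waitNodeReady(node string, timeout time.Duration) error {
		ctx, cancel := context.WithTimeout(context.Background(), timeout)
		defer cancel()
		for {
			ready, err := getNodeReady(ctx, node)
			if err == nil && ready {
				return nil
			}
			if err != nil {
				fmt.Printf("error getting node %q condition \"Ready\" status (will retry): %v\n", node, err)
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("waiting for node to be ready: WaitNodeCondition: %w", ctx.Err())
			case <-time.After(2 * time.Second):
			}
		}
	}

	func main() {
		// The real run used a 6m timeout; kept short here for demonstration.
		fmt.Println(waitNodeReady("no-preload-333352", 10*time.Second))
	}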
	I1213 11:52:53.664699  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:54.164840  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:54.664218  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:55.165093  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:55.385630  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:52:55.504689  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:55.504722  604010 retry.go:31] will retry after 37.277266145s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:55.664229  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:52:55.664327  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:52:55.694796  604010 cri.go:89] found id: ""
	I1213 11:52:55.694825  604010 logs.go:282] 0 containers: []
	W1213 11:52:55.694835  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:52:55.694843  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:52:55.694903  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:52:55.723663  604010 cri.go:89] found id: ""
	I1213 11:52:55.723688  604010 logs.go:282] 0 containers: []
	W1213 11:52:55.723697  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:52:55.723704  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:52:55.723763  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:52:55.748991  604010 cri.go:89] found id: ""
	I1213 11:52:55.749019  604010 logs.go:282] 0 containers: []
	W1213 11:52:55.749027  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:52:55.749034  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:52:55.749096  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:52:55.774258  604010 cri.go:89] found id: ""
	I1213 11:52:55.774281  604010 logs.go:282] 0 containers: []
	W1213 11:52:55.774290  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:52:55.774297  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:52:55.774355  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:52:55.798762  604010 cri.go:89] found id: ""
	I1213 11:52:55.798788  604010 logs.go:282] 0 containers: []
	W1213 11:52:55.798796  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:52:55.798802  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:52:55.798861  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:52:55.823037  604010 cri.go:89] found id: ""
	I1213 11:52:55.823063  604010 logs.go:282] 0 containers: []
	W1213 11:52:55.823071  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:52:55.823078  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:52:55.823139  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:52:55.847241  604010 cri.go:89] found id: ""
	I1213 11:52:55.847267  604010 logs.go:282] 0 containers: []
	W1213 11:52:55.847276  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:52:55.847283  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:52:55.847343  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:52:55.872394  604010 cri.go:89] found id: ""
	I1213 11:52:55.872464  604010 logs.go:282] 0 containers: []
	W1213 11:52:55.872488  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:52:55.872505  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:52:55.872518  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:52:55.888592  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:52:55.888623  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:52:55.954582  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:52:55.945990    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:55.946863    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:55.948347    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:55.948763    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:55.950227    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:52:55.945990    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:55.946863    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:55.948347    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:55.948763    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:55.950227    1847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:52:55.954616  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:52:55.954629  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:52:55.979360  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:52:55.979393  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:52:56.015953  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:52:56.015986  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
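	With zero control-plane containers found, the diagnostics pass falls back to shelling out for journald, dmesg, and CRI state. A sketch of that gathering step; the command set mirrors the log lines above, while the helper name and output format are illustrative:

	// Run each diagnostic probe and print its output; a failed probe is
	// reported but does not stop the remaining ones.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func gatherDiagnostics() {
		probes := map[string][]string{
			"kubelet":          {"sudo", "journalctl", "-u", "kubelet", "-n", "400"},
			"containerd":       {"sudo", "journalctl", "-u", "containerd", "-n", "400"},
			"container status": {"sudo", "crictl", "ps", "-a"},
			"dmesg":            {"sudo", "dmesg", "-PH", "-L=never", "--level", "warn,err,crit,alert,emerg"},
		}
		for name, args := range probes {
			out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
			if err != nil {
				fmt.Printf("gathering %s failed: %v\n", name, err)
				continue
			}
			fmt.Printf("== %s ==\n%s\n", name, out)
		}
	}

	func main() { gatherDiagnostics() }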
	I1213 11:52:56.262345  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:52:56.407172  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:56.407203  604010 retry.go:31] will retry after 30.096993011s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:52:58.574217  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:52:58.585863  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:52:58.585937  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:52:58.613052  604010 cri.go:89] found id: ""
	I1213 11:52:58.613084  604010 logs.go:282] 0 containers: []
	W1213 11:52:58.613094  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:52:58.613102  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:52:58.613187  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:52:58.639217  604010 cri.go:89] found id: ""
	I1213 11:52:58.639241  604010 logs.go:282] 0 containers: []
	W1213 11:52:58.639250  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:52:58.639256  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:52:58.639323  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:52:58.691503  604010 cri.go:89] found id: ""
	I1213 11:52:58.691529  604010 logs.go:282] 0 containers: []
	W1213 11:52:58.691539  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:52:58.691545  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:52:58.691607  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:52:58.739302  604010 cri.go:89] found id: ""
	I1213 11:52:58.739330  604010 logs.go:282] 0 containers: []
	W1213 11:52:58.739339  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:52:58.739345  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:52:58.739407  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:52:58.768957  604010 cri.go:89] found id: ""
	I1213 11:52:58.768985  604010 logs.go:282] 0 containers: []
	W1213 11:52:58.768994  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:52:58.769001  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:52:58.769114  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:52:58.794144  604010 cri.go:89] found id: ""
	I1213 11:52:58.794172  604010 logs.go:282] 0 containers: []
	W1213 11:52:58.794181  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:52:58.794188  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:52:58.794248  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:52:58.818208  604010 cri.go:89] found id: ""
	I1213 11:52:58.818234  604010 logs.go:282] 0 containers: []
	W1213 11:52:58.818243  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:52:58.818250  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:52:58.818307  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:52:58.841575  604010 cri.go:89] found id: ""
	I1213 11:52:58.841600  604010 logs.go:282] 0 containers: []
	W1213 11:52:58.841613  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:52:58.841622  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:52:58.841636  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:52:58.867434  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:52:58.867469  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:52:58.898944  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:52:58.898974  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:52:58.954613  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:52:58.954649  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:52:58.970766  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:52:58.970842  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:52:59.034290  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:52:59.026403    1983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:59.026973    1983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:59.028473    1983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:59.028883    1983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:59.030363    1983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:52:59.026403    1983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:59.026973    1983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:59.028473    1983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:59.028883    1983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:52:59.030363    1983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:53:01.534586  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:01.545484  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:01.545555  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:01.572215  604010 cri.go:89] found id: ""
	I1213 11:53:01.572288  604010 logs.go:282] 0 containers: []
	W1213 11:53:01.572302  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:01.572310  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:01.572388  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:01.598159  604010 cri.go:89] found id: ""
	I1213 11:53:01.598188  604010 logs.go:282] 0 containers: []
	W1213 11:53:01.598196  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:01.598203  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:01.598300  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:01.623153  604010 cri.go:89] found id: ""
	I1213 11:53:01.623177  604010 logs.go:282] 0 containers: []
	W1213 11:53:01.623186  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:01.623195  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:01.623261  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:01.649622  604010 cri.go:89] found id: ""
	I1213 11:53:01.649644  604010 logs.go:282] 0 containers: []
	W1213 11:53:01.649652  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:01.649659  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:01.649737  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:01.683094  604010 cri.go:89] found id: ""
	I1213 11:53:01.683119  604010 logs.go:282] 0 containers: []
	W1213 11:53:01.683127  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:01.683133  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:01.683194  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:01.713141  604010 cri.go:89] found id: ""
	I1213 11:53:01.713209  604010 logs.go:282] 0 containers: []
	W1213 11:53:01.713236  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:01.713255  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:01.713329  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:01.743530  604010 cri.go:89] found id: ""
	I1213 11:53:01.743598  604010 logs.go:282] 0 containers: []
	W1213 11:53:01.743644  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:01.743659  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:01.743724  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:01.768540  604010 cri.go:89] found id: ""
	I1213 11:53:01.768567  604010 logs.go:282] 0 containers: []
	W1213 11:53:01.768575  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:01.768585  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:01.768596  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:01.793626  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:01.793664  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:01.820553  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:01.820583  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:01.876734  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:01.876770  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:01.893351  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:01.893425  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:01.982105  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:01.970876    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:01.971602    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:01.973230    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:01.973588    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:01.977591    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
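	The cycle above is minikube's wait loop: with no kube-apiserver process found, it probes for each expected control-plane container through crictl, finds none, gathers kubelet, dmesg, containerd and container-status logs, and tries again a few seconds later. A minimal Go sketch of the per-component probe, assuming crictl is available on the node (for example inside `minikube ssh`); it illustrates the pattern and is not minikube's actual cri.go code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// The same component names probed in the log above.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
			"kubernetes-dashboard",
		}
		for _, name := range components {
			// List all containers (any state) whose name matches; --quiet
			// prints only IDs, so empty output means "no container found".
			out, err := exec.Command("sudo", "crictl", "ps", "-a",
				"--quiet", "--name="+name).Output()
			if err != nil || strings.TrimSpace(string(out)) == "" {
				fmt.Printf("no container was found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: found id %s\n", name, strings.TrimSpace(string(out)))
		}
	}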
	I1213 11:53:04.482731  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:04.495226  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:04.495299  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:04.521792  604010 cri.go:89] found id: ""
	I1213 11:53:04.521819  604010 logs.go:282] 0 containers: []
	W1213 11:53:04.521829  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:04.521836  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:04.521900  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:04.553223  604010 cri.go:89] found id: ""
	I1213 11:53:04.553249  604010 logs.go:282] 0 containers: []
	W1213 11:53:04.553258  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:04.553264  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:04.553333  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:04.580024  604010 cri.go:89] found id: ""
	I1213 11:53:04.580049  604010 logs.go:282] 0 containers: []
	W1213 11:53:04.580058  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:04.580064  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:04.580123  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:04.622013  604010 cri.go:89] found id: ""
	I1213 11:53:04.622041  604010 logs.go:282] 0 containers: []
	W1213 11:53:04.622050  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:04.622057  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:04.622117  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:04.646212  604010 cri.go:89] found id: ""
	I1213 11:53:04.646236  604010 logs.go:282] 0 containers: []
	W1213 11:53:04.646245  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:04.646251  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:04.646312  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:04.682129  604010 cri.go:89] found id: ""
	I1213 11:53:04.682156  604010 logs.go:282] 0 containers: []
	W1213 11:53:04.682165  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:04.682171  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:04.682288  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:04.710645  604010 cri.go:89] found id: ""
	I1213 11:53:04.710675  604010 logs.go:282] 0 containers: []
	W1213 11:53:04.710706  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:04.710714  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:04.710781  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:04.742882  604010 cri.go:89] found id: ""
	I1213 11:53:04.742906  604010 logs.go:282] 0 containers: []
	W1213 11:53:04.742915  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:04.742926  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:04.742938  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:04.799010  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:04.799046  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:04.814626  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:04.814655  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:04.884663  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:04.876082    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:04.876754    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:04.878443    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:04.878819    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:04.880048    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
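	Every kubectl call in these blocks fails identically: nothing is listening on localhost:8443, so the client cannot even fetch the API group list. The symptom can be confirmed without kubectl by a bare TCP probe; a minimal sketch (the address and timeout mirror the errors above, the rest is illustrative):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// A refused connect here reproduces the repeated
		// "dial tcp [::1]:8443: connect: connection refused" seen above.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is open")
	}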
	I1213 11:53:04.884686  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:04.884717  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:04.910422  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:04.910589  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:05.570211  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:53:05.631760  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 11:53:05.631794  604010 retry.go:31] will retry after 44.542402529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
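	The addon apply fails for the same underlying reason: kubectl validation needs the OpenAPI schema from the (unreachable) apiserver, so every manifest is rejected before anything is sent, and retry.go schedules another attempt 44.5s later. A hedged Go sketch of that retry-with-backoff pattern; the attempt count and delay schedule here are illustrative, not minikube's actual values:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// retryApply re-runs `kubectl <args...>` until it succeeds or the
	// attempts are exhausted, doubling the delay between tries.
	func retryApply(args []string, attempts int) error {
		delay := 5 * time.Second
		var err error
		for i := 0; i < attempts; i++ {
			if err = exec.Command("kubectl", args...).Run(); err == nil {
				return nil
			}
			fmt.Printf("apply failed, will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
			delay *= 2 // simple exponential backoff
		}
		return err
	}

	func main() {
		// One of the manifests from the log, purely as an example path.
		err := retryApply([]string{"apply", "--force",
			"-f", "/etc/kubernetes/addons/dashboard-ns.yaml"}, 3)
		if err != nil {
			fmt.Println("giving up:", err)
		}
	}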
	I1213 11:53:07.442499  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:07.453537  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:07.453615  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:07.482132  604010 cri.go:89] found id: ""
	I1213 11:53:07.482155  604010 logs.go:282] 0 containers: []
	W1213 11:53:07.482163  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:07.482170  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:07.482229  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:07.506787  604010 cri.go:89] found id: ""
	I1213 11:53:07.506813  604010 logs.go:282] 0 containers: []
	W1213 11:53:07.506823  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:07.506829  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:07.506890  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:07.532425  604010 cri.go:89] found id: ""
	I1213 11:53:07.532449  604010 logs.go:282] 0 containers: []
	W1213 11:53:07.532458  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:07.532465  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:07.532527  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:07.557042  604010 cri.go:89] found id: ""
	I1213 11:53:07.557071  604010 logs.go:282] 0 containers: []
	W1213 11:53:07.557081  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:07.557087  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:07.557147  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:07.581888  604010 cri.go:89] found id: ""
	I1213 11:53:07.581919  604010 logs.go:282] 0 containers: []
	W1213 11:53:07.581934  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:07.581940  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:07.582000  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:07.605619  604010 cri.go:89] found id: ""
	I1213 11:53:07.605646  604010 logs.go:282] 0 containers: []
	W1213 11:53:07.605655  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:07.605661  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:07.605722  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:07.631481  604010 cri.go:89] found id: ""
	I1213 11:53:07.631503  604010 logs.go:282] 0 containers: []
	W1213 11:53:07.631511  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:07.631517  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:07.631574  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:07.656152  604010 cri.go:89] found id: ""
	I1213 11:53:07.656178  604010 logs.go:282] 0 containers: []
	W1213 11:53:07.656187  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:07.656196  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:07.656207  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:07.738199  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:07.729773    2303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:07.730173    2303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:07.732061    2303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:07.732672    2303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:07.734342    2303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:07.738218  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:07.738230  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:07.763561  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:07.763597  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:07.791032  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:07.791059  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:07.846125  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:07.846160  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:10.362523  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:10.372985  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:10.373056  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:10.397984  604010 cri.go:89] found id: ""
	I1213 11:53:10.398016  604010 logs.go:282] 0 containers: []
	W1213 11:53:10.398037  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:10.398044  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:10.398121  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:10.423159  604010 cri.go:89] found id: ""
	I1213 11:53:10.423189  604010 logs.go:282] 0 containers: []
	W1213 11:53:10.423198  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:10.423204  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:10.423266  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:10.447027  604010 cri.go:89] found id: ""
	I1213 11:53:10.447055  604010 logs.go:282] 0 containers: []
	W1213 11:53:10.447064  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:10.447071  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:10.447131  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:10.472026  604010 cri.go:89] found id: ""
	I1213 11:53:10.472049  604010 logs.go:282] 0 containers: []
	W1213 11:53:10.472057  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:10.472064  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:10.472122  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:10.503263  604010 cri.go:89] found id: ""
	I1213 11:53:10.503326  604010 logs.go:282] 0 containers: []
	W1213 11:53:10.503352  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:10.503366  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:10.503440  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:10.532481  604010 cri.go:89] found id: ""
	I1213 11:53:10.532509  604010 logs.go:282] 0 containers: []
	W1213 11:53:10.532518  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:10.532524  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:10.532587  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:10.557219  604010 cri.go:89] found id: ""
	I1213 11:53:10.557258  604010 logs.go:282] 0 containers: []
	W1213 11:53:10.557266  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:10.557273  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:10.557342  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:10.585410  604010 cri.go:89] found id: ""
	I1213 11:53:10.585499  604010 logs.go:282] 0 containers: []
	W1213 11:53:10.585522  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:10.585547  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:10.585587  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:10.611450  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:10.611488  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:10.639926  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:10.639954  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:10.696844  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:10.696881  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:10.713623  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:10.713657  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:10.777642  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:10.768681    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:10.769607    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:10.771307    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:10.771820    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:10.773703    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:13.278890  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:13.289748  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:13.289817  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:13.317511  604010 cri.go:89] found id: ""
	I1213 11:53:13.317541  604010 logs.go:282] 0 containers: []
	W1213 11:53:13.317550  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:13.317557  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:13.317618  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:13.343404  604010 cri.go:89] found id: ""
	I1213 11:53:13.343432  604010 logs.go:282] 0 containers: []
	W1213 11:53:13.343441  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:13.343448  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:13.343503  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:13.369007  604010 cri.go:89] found id: ""
	I1213 11:53:13.369030  604010 logs.go:282] 0 containers: []
	W1213 11:53:13.369039  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:13.369046  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:13.369108  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:13.395054  604010 cri.go:89] found id: ""
	I1213 11:53:13.395084  604010 logs.go:282] 0 containers: []
	W1213 11:53:13.395094  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:13.395109  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:13.395171  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:13.424003  604010 cri.go:89] found id: ""
	I1213 11:53:13.424030  604010 logs.go:282] 0 containers: []
	W1213 11:53:13.424039  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:13.424046  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:13.424105  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:13.448932  604010 cri.go:89] found id: ""
	I1213 11:53:13.449012  604010 logs.go:282] 0 containers: []
	W1213 11:53:13.449029  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:13.449036  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:13.449112  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:13.474446  604010 cri.go:89] found id: ""
	I1213 11:53:13.474472  604010 logs.go:282] 0 containers: []
	W1213 11:53:13.474481  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:13.474487  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:13.474611  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:13.501117  604010 cri.go:89] found id: ""
	I1213 11:53:13.501141  604010 logs.go:282] 0 containers: []
	W1213 11:53:13.501150  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:13.501159  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:13.501171  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:13.557792  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:13.557829  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:13.574541  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:13.574574  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:13.639676  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:13.629891    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:13.631830    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:13.632611    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:13.634220    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:13.634886    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:13.639700  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:13.639713  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:13.664830  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:13.664911  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:16.204971  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:16.215560  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:16.215635  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:16.240196  604010 cri.go:89] found id: ""
	I1213 11:53:16.240220  604010 logs.go:282] 0 containers: []
	W1213 11:53:16.240229  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:16.240235  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:16.240293  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:16.265455  604010 cri.go:89] found id: ""
	I1213 11:53:16.265487  604010 logs.go:282] 0 containers: []
	W1213 11:53:16.265497  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:16.265504  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:16.265562  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:16.289852  604010 cri.go:89] found id: ""
	I1213 11:53:16.289875  604010 logs.go:282] 0 containers: []
	W1213 11:53:16.289886  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:16.289893  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:16.289954  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:16.315329  604010 cri.go:89] found id: ""
	I1213 11:53:16.315353  604010 logs.go:282] 0 containers: []
	W1213 11:53:16.315362  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:16.315368  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:16.315433  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:16.346811  604010 cri.go:89] found id: ""
	I1213 11:53:16.346835  604010 logs.go:282] 0 containers: []
	W1213 11:53:16.346844  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:16.346856  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:16.346916  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:16.371580  604010 cri.go:89] found id: ""
	I1213 11:53:16.371608  604010 logs.go:282] 0 containers: []
	W1213 11:53:16.371617  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:16.371623  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:16.371759  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:16.397183  604010 cri.go:89] found id: ""
	I1213 11:53:16.397210  604010 logs.go:282] 0 containers: []
	W1213 11:53:16.397219  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:16.397225  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:16.397286  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:16.422782  604010 cri.go:89] found id: ""
	I1213 11:53:16.422810  604010 logs.go:282] 0 containers: []
	W1213 11:53:16.422821  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:16.422831  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:16.422848  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:16.478667  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:16.478714  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:16.494974  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:16.495011  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:16.560810  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:16.552790    2649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:16.553168    2649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:16.554711    2649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:16.555221    2649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:16.556703    2649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:16.560835  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:16.560849  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:16.586263  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:16.586301  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:19.117851  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:19.128831  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:19.128899  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:19.156507  604010 cri.go:89] found id: ""
	I1213 11:53:19.156537  604010 logs.go:282] 0 containers: []
	W1213 11:53:19.156546  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:19.156553  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:19.156619  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:19.184004  604010 cri.go:89] found id: ""
	I1213 11:53:19.184032  604010 logs.go:282] 0 containers: []
	W1213 11:53:19.184041  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:19.184048  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:19.184108  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:19.210447  604010 cri.go:89] found id: ""
	I1213 11:53:19.210475  604010 logs.go:282] 0 containers: []
	W1213 11:53:19.210485  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:19.210491  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:19.210563  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:19.243214  604010 cri.go:89] found id: ""
	I1213 11:53:19.243241  604010 logs.go:282] 0 containers: []
	W1213 11:53:19.243250  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:19.243257  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:19.243317  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:19.267811  604010 cri.go:89] found id: ""
	I1213 11:53:19.267835  604010 logs.go:282] 0 containers: []
	W1213 11:53:19.267845  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:19.267851  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:19.267912  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:19.291841  604010 cri.go:89] found id: ""
	I1213 11:53:19.291863  604010 logs.go:282] 0 containers: []
	W1213 11:53:19.291872  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:19.291878  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:19.291942  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:19.316863  604010 cri.go:89] found id: ""
	I1213 11:53:19.316890  604010 logs.go:282] 0 containers: []
	W1213 11:53:19.316898  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:19.316904  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:19.316963  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:19.341844  604010 cri.go:89] found id: ""
	I1213 11:53:19.341872  604010 logs.go:282] 0 containers: []
	W1213 11:53:19.341881  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:19.341890  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:19.341901  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:19.397829  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:19.397868  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:19.413720  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:19.413749  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:19.481667  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:19.473280    2765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:19.474094    2765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:19.475625    2765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:19.476130    2765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:19.477751    2765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:19.481694  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:19.481706  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:19.507029  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:19.507069  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:22.036187  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:22.047443  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:22.047516  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:22.073399  604010 cri.go:89] found id: ""
	I1213 11:53:22.073425  604010 logs.go:282] 0 containers: []
	W1213 11:53:22.073433  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:22.073440  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:22.073519  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:22.102458  604010 cri.go:89] found id: ""
	I1213 11:53:22.102483  604010 logs.go:282] 0 containers: []
	W1213 11:53:22.102492  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:22.102499  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:22.102564  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:22.127170  604010 cri.go:89] found id: ""
	I1213 11:53:22.127195  604010 logs.go:282] 0 containers: []
	W1213 11:53:22.127203  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:22.127210  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:22.127270  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:22.152852  604010 cri.go:89] found id: ""
	I1213 11:53:22.152879  604010 logs.go:282] 0 containers: []
	W1213 11:53:22.152887  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:22.152894  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:22.152972  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:22.194915  604010 cri.go:89] found id: ""
	I1213 11:53:22.194939  604010 logs.go:282] 0 containers: []
	W1213 11:53:22.194947  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:22.194985  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:22.195074  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:22.228469  604010 cri.go:89] found id: ""
	I1213 11:53:22.228497  604010 logs.go:282] 0 containers: []
	W1213 11:53:22.228507  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:22.228514  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:22.228574  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:22.257833  604010 cri.go:89] found id: ""
	I1213 11:53:22.257908  604010 logs.go:282] 0 containers: []
	W1213 11:53:22.257931  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:22.257949  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:22.258038  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:22.283351  604010 cri.go:89] found id: ""
	I1213 11:53:22.283375  604010 logs.go:282] 0 containers: []
	W1213 11:53:22.283385  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:22.283394  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:22.283425  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:22.339722  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:22.339759  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:22.358616  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:22.358649  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:22.425578  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:22.417365    2881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:22.418082    2881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:22.419768    2881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:22.420247    2881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:22.421786    2881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:53:22.417365    2881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:22.418082    2881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:22.419768    2881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:22.420247    2881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:22.421786    2881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:53:22.425645  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:22.425665  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:22.450867  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:22.450905  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
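At this point the run is in a retry loop: pgrep finds no kube-apiserver process, every crictl query returns an empty ID list, and minikube falls back to gathering kubelet, dmesg, containerd, and container-status logs before polling again. A minimal sketch of the same presence check, run by hand inside the node (the profile name is a placeholder, not taken from this log):

    # Hypothetical manual replay of the poll that keeps coming back empty above.
    # Lists all CRI containers, in any state, whose name matches kube-apiserver.
    minikube -p <profile> ssh -- sudo crictl ps -a --quiet --name=kube-apiserver
    # An empty result, as in this log, means the apiserver container was never created.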
	I1213 11:53:24.977642  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:24.988556  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:24.988625  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:25.016189  604010 cri.go:89] found id: ""
	I1213 11:53:25.016224  604010 logs.go:282] 0 containers: []
	W1213 11:53:25.016247  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:25.016255  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:25.016320  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:25.044535  604010 cri.go:89] found id: ""
	I1213 11:53:25.044558  604010 logs.go:282] 0 containers: []
	W1213 11:53:25.044567  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:25.044573  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:25.044632  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:25.070715  604010 cri.go:89] found id: ""
	I1213 11:53:25.070743  604010 logs.go:282] 0 containers: []
	W1213 11:53:25.070752  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:25.070759  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:25.070822  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:25.096936  604010 cri.go:89] found id: ""
	I1213 11:53:25.096959  604010 logs.go:282] 0 containers: []
	W1213 11:53:25.096967  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:25.096974  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:25.097035  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:25.122437  604010 cri.go:89] found id: ""
	I1213 11:53:25.122470  604010 logs.go:282] 0 containers: []
	W1213 11:53:25.122480  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:25.122486  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:25.122584  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:25.148962  604010 cri.go:89] found id: ""
	I1213 11:53:25.148988  604010 logs.go:282] 0 containers: []
	W1213 11:53:25.148997  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:25.149003  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:25.149074  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:25.181633  604010 cri.go:89] found id: ""
	I1213 11:53:25.181655  604010 logs.go:282] 0 containers: []
	W1213 11:53:25.181664  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:25.181670  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:25.181732  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:25.212760  604010 cri.go:89] found id: ""
	I1213 11:53:25.212782  604010 logs.go:282] 0 containers: []
	W1213 11:53:25.212790  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:25.212799  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:25.212811  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:25.276581  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:25.268697    2988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:25.269118    2988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:25.270651    2988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:25.271026    2988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:25.272496    2988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:53:25.268697    2988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:25.269118    2988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:25.270651    2988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:25.271026    2988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:25.272496    2988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:53:25.276603  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:25.276616  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:25.302726  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:25.302763  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:25.334110  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:25.334183  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:25.390064  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:25.390100  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:26.504848  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 11:53:26.566930  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:53:26.567035  604010 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
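The storage-provisioner apply fails for the same underlying reason as the describe-nodes calls: kubectl's client-side validation has to download the OpenAPI schema from the apiserver, and nothing is answering on localhost:8443. A hedged way to confirm from inside the node that the port itself is dead before suspecting the manifest (assumes the apiserver port 8443 seen throughout this log):

    # Illustrative diagnostics only; both should fail exactly as the log does.
    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
    curl -ksS https://localhost:8443/healthz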
	I1213 11:53:27.907342  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:27.919244  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:27.919322  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:27.953618  604010 cri.go:89] found id: ""
	I1213 11:53:27.953646  604010 logs.go:282] 0 containers: []
	W1213 11:53:27.953656  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:27.953662  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:27.953732  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:27.983451  604010 cri.go:89] found id: ""
	I1213 11:53:27.983474  604010 logs.go:282] 0 containers: []
	W1213 11:53:27.983483  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:27.983494  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:27.983553  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:28.015089  604010 cri.go:89] found id: ""
	I1213 11:53:28.015124  604010 logs.go:282] 0 containers: []
	W1213 11:53:28.015133  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:28.015141  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:28.015206  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:28.040741  604010 cri.go:89] found id: ""
	I1213 11:53:28.040764  604010 logs.go:282] 0 containers: []
	W1213 11:53:28.040773  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:28.040780  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:28.040847  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:28.066994  604010 cri.go:89] found id: ""
	I1213 11:53:28.067023  604010 logs.go:282] 0 containers: []
	W1213 11:53:28.067032  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:28.067039  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:28.067100  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:28.096788  604010 cri.go:89] found id: ""
	I1213 11:53:28.096819  604010 logs.go:282] 0 containers: []
	W1213 11:53:28.096828  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:28.096835  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:28.096896  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:28.124766  604010 cri.go:89] found id: ""
	I1213 11:53:28.124789  604010 logs.go:282] 0 containers: []
	W1213 11:53:28.124798  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:28.124804  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:28.124873  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:28.159549  604010 cri.go:89] found id: ""
	I1213 11:53:28.159577  604010 logs.go:282] 0 containers: []
	W1213 11:53:28.159585  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:28.159594  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:28.159606  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:28.199573  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:28.199603  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:28.270740  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:28.270789  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:28.287502  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:28.287532  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:28.351364  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:28.343352    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:28.343924    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:28.345385    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:28.345783    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:28.347266    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:53:28.343352    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:28.343924    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:28.345385    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:28.345783    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:28.347266    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:53:28.351388  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:28.351401  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:30.876922  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:30.887774  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:30.887849  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:30.923850  604010 cri.go:89] found id: ""
	I1213 11:53:30.923878  604010 logs.go:282] 0 containers: []
	W1213 11:53:30.923887  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:30.923893  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:30.923952  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:30.951470  604010 cri.go:89] found id: ""
	I1213 11:53:30.951498  604010 logs.go:282] 0 containers: []
	W1213 11:53:30.951507  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:30.951513  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:30.951570  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:30.984618  604010 cri.go:89] found id: ""
	I1213 11:53:30.984644  604010 logs.go:282] 0 containers: []
	W1213 11:53:30.984653  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:30.984659  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:30.984718  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:31.013958  604010 cri.go:89] found id: ""
	I1213 11:53:31.013986  604010 logs.go:282] 0 containers: []
	W1213 11:53:31.013994  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:31.014001  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:31.014062  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:31.039624  604010 cri.go:89] found id: ""
	I1213 11:53:31.039651  604010 logs.go:282] 0 containers: []
	W1213 11:53:31.039661  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:31.039668  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:31.039735  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:31.065442  604010 cri.go:89] found id: ""
	I1213 11:53:31.065471  604010 logs.go:282] 0 containers: []
	W1213 11:53:31.065480  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:31.065526  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:31.065591  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:31.093987  604010 cri.go:89] found id: ""
	I1213 11:53:31.094012  604010 logs.go:282] 0 containers: []
	W1213 11:53:31.094022  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:31.094028  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:31.094092  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:31.120512  604010 cri.go:89] found id: ""
	I1213 11:53:31.120536  604010 logs.go:282] 0 containers: []
	W1213 11:53:31.120545  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:31.120555  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:31.120568  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:31.193061  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:31.184276    3220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:31.185271    3220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:31.187099    3220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:31.187409    3220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:31.188923    3220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:53:31.184276    3220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:31.185271    3220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:31.187099    3220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:31.187409    3220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:31.188923    3220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:53:31.193086  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:31.193099  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:31.222013  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:31.222046  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:31.251352  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:31.251380  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:31.307515  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:31.307558  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:32.782865  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 11:53:32.843769  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:53:32.843886  604010 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
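As the error text itself suggests, validation can be turned off with --validate=false, but that only skips the OpenAPI download; the apply would still have to reach the same refused endpoint, so it cannot succeed here. For illustration, the suggested variant of the exact command from the log:

    # Sketch only: bypasses schema validation, not the refused connection.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --validate=false \
      --force -f /etc/kubernetes/addons/storageclass.yaml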
	I1213 11:53:33.825081  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:33.836405  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:33.836483  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:33.862074  604010 cri.go:89] found id: ""
	I1213 11:53:33.862097  604010 logs.go:282] 0 containers: []
	W1213 11:53:33.862108  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:33.862114  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:33.862174  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:33.887847  604010 cri.go:89] found id: ""
	I1213 11:53:33.887872  604010 logs.go:282] 0 containers: []
	W1213 11:53:33.887881  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:33.887888  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:33.887953  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:33.922816  604010 cri.go:89] found id: ""
	I1213 11:53:33.922839  604010 logs.go:282] 0 containers: []
	W1213 11:53:33.922847  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:33.922854  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:33.922912  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:33.956255  604010 cri.go:89] found id: ""
	I1213 11:53:33.956278  604010 logs.go:282] 0 containers: []
	W1213 11:53:33.956286  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:33.956296  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:33.956357  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:33.988633  604010 cri.go:89] found id: ""
	I1213 11:53:33.988660  604010 logs.go:282] 0 containers: []
	W1213 11:53:33.988668  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:33.988675  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:33.988734  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:34.016574  604010 cri.go:89] found id: ""
	I1213 11:53:34.016600  604010 logs.go:282] 0 containers: []
	W1213 11:53:34.016610  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:34.016618  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:34.016688  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:34.047246  604010 cri.go:89] found id: ""
	I1213 11:53:34.047274  604010 logs.go:282] 0 containers: []
	W1213 11:53:34.047283  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:34.047290  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:34.047351  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:34.073767  604010 cri.go:89] found id: ""
	I1213 11:53:34.073791  604010 logs.go:282] 0 containers: []
	W1213 11:53:34.073801  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:34.073810  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:34.073821  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:34.142086  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:34.142126  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:34.160135  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:34.160221  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:34.242780  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:34.234520    3342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:34.235116    3342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:34.236649    3342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:34.237063    3342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:34.238589    3342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:53:34.234520    3342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:34.235116    3342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:34.236649    3342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:34.237063    3342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:34.238589    3342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:53:34.242803  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:34.242817  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:34.268944  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:34.268981  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:36.800525  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:36.813555  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:36.813631  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:36.838503  604010 cri.go:89] found id: ""
	I1213 11:53:36.838530  604010 logs.go:282] 0 containers: []
	W1213 11:53:36.838539  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:36.838546  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:36.838610  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:36.863532  604010 cri.go:89] found id: ""
	I1213 11:53:36.863553  604010 logs.go:282] 0 containers: []
	W1213 11:53:36.863562  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:36.863569  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:36.863629  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:36.888886  604010 cri.go:89] found id: ""
	I1213 11:53:36.888912  604010 logs.go:282] 0 containers: []
	W1213 11:53:36.888920  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:36.888926  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:36.888992  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:36.917481  604010 cri.go:89] found id: ""
	I1213 11:53:36.917566  604010 logs.go:282] 0 containers: []
	W1213 11:53:36.917589  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:36.917608  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:36.917708  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:36.951605  604010 cri.go:89] found id: ""
	I1213 11:53:36.951676  604010 logs.go:282] 0 containers: []
	W1213 11:53:36.951698  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:36.951716  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:36.951808  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:36.980776  604010 cri.go:89] found id: ""
	I1213 11:53:36.980798  604010 logs.go:282] 0 containers: []
	W1213 11:53:36.980807  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:36.980814  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:36.980878  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:37.014102  604010 cri.go:89] found id: ""
	I1213 11:53:37.014129  604010 logs.go:282] 0 containers: []
	W1213 11:53:37.014139  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:37.014146  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:37.014218  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:37.041045  604010 cri.go:89] found id: ""
	I1213 11:53:37.041068  604010 logs.go:282] 0 containers: []
	W1213 11:53:37.041076  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:37.041086  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:37.041099  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:37.057607  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:37.057677  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:37.123513  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:37.114613    3451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:37.115389    3451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:37.117143    3451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:37.117811    3451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:37.119588    3451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:53:37.114613    3451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:37.115389    3451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:37.117143    3451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:37.117811    3451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:37.119588    3451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:53:37.123585  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:37.123612  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:37.149745  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:37.149782  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:37.190123  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:37.190160  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:39.753400  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:39.766329  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:39.766428  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:39.794895  604010 cri.go:89] found id: ""
	I1213 11:53:39.794979  604010 logs.go:282] 0 containers: []
	W1213 11:53:39.794995  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:39.795003  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:39.795077  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:39.819418  604010 cri.go:89] found id: ""
	I1213 11:53:39.819444  604010 logs.go:282] 0 containers: []
	W1213 11:53:39.819453  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:39.819462  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:39.819522  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:39.847949  604010 cri.go:89] found id: ""
	I1213 11:53:39.847976  604010 logs.go:282] 0 containers: []
	W1213 11:53:39.847985  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:39.847992  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:39.848064  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:39.872978  604010 cri.go:89] found id: ""
	I1213 11:53:39.873009  604010 logs.go:282] 0 containers: []
	W1213 11:53:39.873018  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:39.873025  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:39.873091  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:39.900210  604010 cri.go:89] found id: ""
	I1213 11:53:39.900236  604010 logs.go:282] 0 containers: []
	W1213 11:53:39.900245  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:39.900252  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:39.900311  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:39.934251  604010 cri.go:89] found id: ""
	I1213 11:53:39.934276  604010 logs.go:282] 0 containers: []
	W1213 11:53:39.934285  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:39.934291  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:39.934351  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:39.964389  604010 cri.go:89] found id: ""
	I1213 11:53:39.964416  604010 logs.go:282] 0 containers: []
	W1213 11:53:39.964425  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:39.964431  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:39.964496  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:39.995412  604010 cri.go:89] found id: ""
	I1213 11:53:39.995435  604010 logs.go:282] 0 containers: []
	W1213 11:53:39.995444  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:39.995454  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:39.995466  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:40.074600  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:40.074644  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:40.093065  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:40.093143  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:40.162566  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:40.153392    3563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:40.154048    3563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:40.155849    3563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:40.156585    3563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:40.158356    3563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:53:40.153392    3563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:40.154048    3563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:40.155849    3563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:40.156585    3563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:40.158356    3563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:53:40.162633  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:40.162659  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:40.191469  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:40.191548  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
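Every gathering pass runs the same fixed set of commands, so they can be replayed by hand when triaging a report like this one. A compact sketch, assuming a containerd node as in this run (the $(...) form is equivalent to the backtick substitution in the logged command):

    # Manual replay of minikube's log-gathering commands from the passes above.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a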
	I1213 11:53:42.738325  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:42.749369  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:42.749435  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:42.776660  604010 cri.go:89] found id: ""
	I1213 11:53:42.776686  604010 logs.go:282] 0 containers: []
	W1213 11:53:42.776695  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:42.776701  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:42.776761  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:42.802014  604010 cri.go:89] found id: ""
	I1213 11:53:42.802042  604010 logs.go:282] 0 containers: []
	W1213 11:53:42.802051  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:42.802057  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:42.802116  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:42.826554  604010 cri.go:89] found id: ""
	I1213 11:53:42.826583  604010 logs.go:282] 0 containers: []
	W1213 11:53:42.826592  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:42.826598  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:42.826659  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:42.853269  604010 cri.go:89] found id: ""
	I1213 11:53:42.853296  604010 logs.go:282] 0 containers: []
	W1213 11:53:42.853305  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:42.853319  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:42.853384  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:42.880122  604010 cri.go:89] found id: ""
	I1213 11:53:42.880150  604010 logs.go:282] 0 containers: []
	W1213 11:53:42.880159  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:42.880166  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:42.880227  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:42.904811  604010 cri.go:89] found id: ""
	I1213 11:53:42.904834  604010 logs.go:282] 0 containers: []
	W1213 11:53:42.904843  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:42.904850  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:42.904908  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:42.930715  604010 cri.go:89] found id: ""
	I1213 11:53:42.930744  604010 logs.go:282] 0 containers: []
	W1213 11:53:42.930753  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:42.930759  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:42.930815  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:42.964092  604010 cri.go:89] found id: ""
	I1213 11:53:42.964115  604010 logs.go:282] 0 containers: []
	W1213 11:53:42.964123  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:42.964132  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:42.964144  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:42.994219  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:42.994254  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:43.031007  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:43.031036  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:43.086377  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:43.086412  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:43.103185  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:43.103216  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:43.180526  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:43.171640    3695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:43.172414    3695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:43.174057    3695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:43.174649    3695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:43.176278    3695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:45.681512  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:45.691980  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:45.692050  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:45.720468  604010 cri.go:89] found id: ""
	I1213 11:53:45.720494  604010 logs.go:282] 0 containers: []
	W1213 11:53:45.720503  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:45.720509  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:45.720566  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:45.745270  604010 cri.go:89] found id: ""
	I1213 11:53:45.745297  604010 logs.go:282] 0 containers: []
	W1213 11:53:45.745305  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:45.745312  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:45.745371  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:45.771959  604010 cri.go:89] found id: ""
	I1213 11:53:45.771989  604010 logs.go:282] 0 containers: []
	W1213 11:53:45.771998  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:45.772005  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:45.772063  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:45.797561  604010 cri.go:89] found id: ""
	I1213 11:53:45.797588  604010 logs.go:282] 0 containers: []
	W1213 11:53:45.797597  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:45.797604  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:45.797666  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:45.821937  604010 cri.go:89] found id: ""
	I1213 11:53:45.821965  604010 logs.go:282] 0 containers: []
	W1213 11:53:45.821975  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:45.821981  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:45.822041  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:45.854390  604010 cri.go:89] found id: ""
	I1213 11:53:45.854414  604010 logs.go:282] 0 containers: []
	W1213 11:53:45.854423  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:45.854430  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:45.854489  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:45.879570  604010 cri.go:89] found id: ""
	I1213 11:53:45.879597  604010 logs.go:282] 0 containers: []
	W1213 11:53:45.879616  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:45.879623  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:45.879681  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:45.904307  604010 cri.go:89] found id: ""
	I1213 11:53:45.904335  604010 logs.go:282] 0 containers: []
	W1213 11:53:45.904344  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:45.904354  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:45.904364  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:45.971467  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:45.971554  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:45.988842  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:45.988868  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:46.054484  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:46.046672    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:46.047076    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:46.048668    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:46.049161    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:46.050614    3795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:46.054553  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:46.054579  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:46.079997  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:46.080032  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
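Each retry round issues the same crictl query once per control-plane component and addon; all eight come back with empty IDs, i.e. containerd has never created any of these containers. The round is equivalent to the following loop (a sketch; the component list is copied from the queries above):

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$c")
	  echo "$c: ${ids:-<none>}"
	done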
	I1213 11:53:48.608207  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:48.618848  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:48.618926  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:48.644320  604010 cri.go:89] found id: ""
	I1213 11:53:48.644344  604010 logs.go:282] 0 containers: []
	W1213 11:53:48.644352  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:48.644359  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:48.644420  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:48.669194  604010 cri.go:89] found id: ""
	I1213 11:53:48.669226  604010 logs.go:282] 0 containers: []
	W1213 11:53:48.669236  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:48.669242  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:48.669308  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:48.694072  604010 cri.go:89] found id: ""
	I1213 11:53:48.694097  604010 logs.go:282] 0 containers: []
	W1213 11:53:48.694107  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:48.694113  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:48.694188  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:48.718654  604010 cri.go:89] found id: ""
	I1213 11:53:48.718679  604010 logs.go:282] 0 containers: []
	W1213 11:53:48.718720  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:48.718727  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:48.718800  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:48.742539  604010 cri.go:89] found id: ""
	I1213 11:53:48.742571  604010 logs.go:282] 0 containers: []
	W1213 11:53:48.742580  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:48.742587  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:48.742660  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:48.771087  604010 cri.go:89] found id: ""
	I1213 11:53:48.771111  604010 logs.go:282] 0 containers: []
	W1213 11:53:48.771120  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:48.771126  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:48.771185  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:48.797732  604010 cri.go:89] found id: ""
	I1213 11:53:48.797755  604010 logs.go:282] 0 containers: []
	W1213 11:53:48.797764  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:48.797770  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:48.797834  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:48.822805  604010 cri.go:89] found id: ""
	I1213 11:53:48.822830  604010 logs.go:282] 0 containers: []
	W1213 11:53:48.822839  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:48.822849  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:48.822860  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:48.879446  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:48.879514  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:48.895910  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:48.895938  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:48.987206  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:48.978941    3903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:48.979739    3903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:48.981488    3903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:48.981826    3903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:48.983267    3903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:48.987238  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:48.987251  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:49.014114  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:49.014150  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:50.175475  604010 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:53:50.239481  604010 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 11:53:50.239579  604010 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	]
	I1213 11:53:50.242787  604010 out.go:179] * Enabled addons: 
	I1213 11:53:50.245448  604010 addons.go:530] duration metric: took 1m54.618181483s for enable addons: enabled=[]
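The dashboard addon fails for the same root cause: kubectl apply validates each manifest against the apiserver's /openapi/v2 endpoint before submitting it, so with no apiserver listening every file is rejected client-side. The stderr suggests --validate=false, but that only skips the schema download; the apply itself would still need a live apiserver. For illustration (KUBECONFIG, kubectl binary, and manifest path taken from the log; a single manifest shown):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
	  -f /etc/kubernetes/addons/dashboard-ns.yaml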
	I1213 11:53:51.543477  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:51.554449  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:51.554521  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:51.579307  604010 cri.go:89] found id: ""
	I1213 11:53:51.579335  604010 logs.go:282] 0 containers: []
	W1213 11:53:51.579344  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:51.579350  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:51.579411  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:51.605002  604010 cri.go:89] found id: ""
	I1213 11:53:51.605029  604010 logs.go:282] 0 containers: []
	W1213 11:53:51.605040  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:51.605047  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:51.605108  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:51.629728  604010 cri.go:89] found id: ""
	I1213 11:53:51.629761  604010 logs.go:282] 0 containers: []
	W1213 11:53:51.629770  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:51.629777  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:51.629840  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:51.656823  604010 cri.go:89] found id: ""
	I1213 11:53:51.656846  604010 logs.go:282] 0 containers: []
	W1213 11:53:51.656855  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:51.656862  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:51.656919  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:51.684689  604010 cri.go:89] found id: ""
	I1213 11:53:51.684712  604010 logs.go:282] 0 containers: []
	W1213 11:53:51.684721  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:51.684728  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:51.684787  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:51.709741  604010 cri.go:89] found id: ""
	I1213 11:53:51.709768  604010 logs.go:282] 0 containers: []
	W1213 11:53:51.709776  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:51.709784  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:51.709895  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:51.735821  604010 cri.go:89] found id: ""
	I1213 11:53:51.735848  604010 logs.go:282] 0 containers: []
	W1213 11:53:51.735857  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:51.735863  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:51.735922  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:51.765085  604010 cri.go:89] found id: ""
	I1213 11:53:51.765111  604010 logs.go:282] 0 containers: []
	W1213 11:53:51.765120  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:51.765130  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:51.765143  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:51.820951  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:51.820986  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:51.837298  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:51.837448  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:51.903778  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:51.894875    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:51.895698    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:51.897404    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:51.897825    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:51.899293    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:51.903855  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:51.903876  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:51.931477  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:51.931561  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:53:54.461061  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:54.471768  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:54.471839  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:54.497629  604010 cri.go:89] found id: ""
	I1213 11:53:54.497651  604010 logs.go:282] 0 containers: []
	W1213 11:53:54.497660  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:54.497666  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:54.497725  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:54.523805  604010 cri.go:89] found id: ""
	I1213 11:53:54.523830  604010 logs.go:282] 0 containers: []
	W1213 11:53:54.523839  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:54.523846  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:54.523905  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:54.548988  604010 cri.go:89] found id: ""
	I1213 11:53:54.549012  604010 logs.go:282] 0 containers: []
	W1213 11:53:54.549021  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:54.549027  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:54.549089  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:54.584912  604010 cri.go:89] found id: ""
	I1213 11:53:54.584996  604010 logs.go:282] 0 containers: []
	W1213 11:53:54.585012  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:54.585020  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:54.585094  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:54.613768  604010 cri.go:89] found id: ""
	I1213 11:53:54.613810  604010 logs.go:282] 0 containers: []
	W1213 11:53:54.613822  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:54.613832  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:54.613917  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:54.638498  604010 cri.go:89] found id: ""
	I1213 11:53:54.638523  604010 logs.go:282] 0 containers: []
	W1213 11:53:54.638531  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:54.638539  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:54.638597  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:54.663796  604010 cri.go:89] found id: ""
	I1213 11:53:54.663863  604010 logs.go:282] 0 containers: []
	W1213 11:53:54.663886  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:54.663904  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:54.663994  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:54.688512  604010 cri.go:89] found id: ""
	I1213 11:53:54.688595  604010 logs.go:282] 0 containers: []
	W1213 11:53:54.688612  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:54.688623  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:54.688635  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:54.745122  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:54.745158  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:54.761471  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:54.761502  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:54.827485  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:54.818964    4132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:54.819562    4132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:54.821065    4132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:54.821615    4132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:54.823257    4132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:54.827506  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:54.827519  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:54.853348  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:54.853383  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
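Every cycle gathers the same four log sources, and the container-status command falls back to docker when crictl is not on PATH. The full set, verbatim from the log, can be run in one pass:

	sudo journalctl -u kubelet -n 400                                          # kubelet: why the static pods are not starting
	sudo journalctl -u containerd -n 400                                       # runtime-level errors
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel warnings/errors
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a              # container status, docker fallback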
	I1213 11:53:57.386439  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:53:57.396996  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:53:57.397067  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:53:57.432425  604010 cri.go:89] found id: ""
	I1213 11:53:57.432451  604010 logs.go:282] 0 containers: []
	W1213 11:53:57.432461  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:53:57.432468  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:53:57.432531  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:53:57.468740  604010 cri.go:89] found id: ""
	I1213 11:53:57.468767  604010 logs.go:282] 0 containers: []
	W1213 11:53:57.468777  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:53:57.468783  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:53:57.468848  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:53:57.496008  604010 cri.go:89] found id: ""
	I1213 11:53:57.496032  604010 logs.go:282] 0 containers: []
	W1213 11:53:57.496041  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:53:57.496053  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:53:57.496113  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:53:57.522430  604010 cri.go:89] found id: ""
	I1213 11:53:57.522454  604010 logs.go:282] 0 containers: []
	W1213 11:53:57.522463  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:53:57.522469  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:53:57.522528  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:53:57.547956  604010 cri.go:89] found id: ""
	I1213 11:53:57.547980  604010 logs.go:282] 0 containers: []
	W1213 11:53:57.547988  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:53:57.547994  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:53:57.548054  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:53:57.573554  604010 cri.go:89] found id: ""
	I1213 11:53:57.573579  604010 logs.go:282] 0 containers: []
	W1213 11:53:57.573589  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:53:57.573596  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:53:57.573658  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:53:57.597400  604010 cri.go:89] found id: ""
	I1213 11:53:57.597428  604010 logs.go:282] 0 containers: []
	W1213 11:53:57.597437  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:53:57.597443  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:53:57.597501  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:53:57.621599  604010 cri.go:89] found id: ""
	I1213 11:53:57.621623  604010 logs.go:282] 0 containers: []
	W1213 11:53:57.621632  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:53:57.621642  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:53:57.621653  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:53:57.677116  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:53:57.677153  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:53:57.692856  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:53:57.692929  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:53:57.758229  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:53:57.748721    4239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:57.749368    4239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:57.751042    4239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:57.751857    4239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:53:57.753632    4239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:53:57.758252  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:53:57.758266  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:53:57.784520  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:53:57.784560  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:00.317292  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:00.352525  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:00.352620  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:00.392603  604010 cri.go:89] found id: ""
	I1213 11:54:00.392636  604010 logs.go:282] 0 containers: []
	W1213 11:54:00.392646  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:00.392654  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:00.392736  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:00.447117  604010 cri.go:89] found id: ""
	I1213 11:54:00.447149  604010 logs.go:282] 0 containers: []
	W1213 11:54:00.447158  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:00.447178  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:00.447281  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:00.479294  604010 cri.go:89] found id: ""
	I1213 11:54:00.479324  604010 logs.go:282] 0 containers: []
	W1213 11:54:00.479333  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:00.479339  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:00.479406  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:00.510064  604010 cri.go:89] found id: ""
	I1213 11:54:00.510092  604010 logs.go:282] 0 containers: []
	W1213 11:54:00.510101  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:00.510108  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:00.510184  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:00.537774  604010 cri.go:89] found id: ""
	I1213 11:54:00.537801  604010 logs.go:282] 0 containers: []
	W1213 11:54:00.537810  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:00.537816  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:00.537877  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:00.563430  604010 cri.go:89] found id: ""
	I1213 11:54:00.563460  604010 logs.go:282] 0 containers: []
	W1213 11:54:00.563469  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:00.563475  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:00.563534  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:00.588470  604010 cri.go:89] found id: ""
	I1213 11:54:00.588495  604010 logs.go:282] 0 containers: []
	W1213 11:54:00.588503  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:00.588510  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:00.588573  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:00.616819  604010 cri.go:89] found id: ""
	I1213 11:54:00.616853  604010 logs.go:282] 0 containers: []
	W1213 11:54:00.616865  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:00.616874  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:00.616887  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:00.632810  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:00.632837  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:00.697200  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:00.688095    4352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:00.688902    4352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:00.690382    4352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:00.690873    4352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:00.692718    4352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:00.688095    4352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:00.688902    4352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:00.690382    4352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:00.690873    4352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:00.692718    4352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:54:00.697225  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:00.697239  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:00.722351  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:00.722391  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:00.753453  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:00.753489  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
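The block above is one complete iteration of minikube's control-plane probe: a pgrep for a live kube-apiserver process, then a per-component CRI listing, and, because every listing comes back empty, a fall-through into log gathering. The same probe can be reproduced by hand inside the node. A minimal sketch using only commands visible in the log (the loop wrapper itself is illustrative):

    # Sketch: the per-component CRI listing the log repeats, as one loop.
    # The eight names are exactly the components probed above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")  # all states, IDs only
      [ -z "$ids" ] && echo "no container matching \"$name\"" \
                    || echo "$name -> $ids"
    done

An empty result for every component means containerd itself is answering (the command exits 0) but no Kubernetes containers were ever created on this node.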
	I1213 11:54:03.309839  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:03.321093  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:03.321163  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:03.349567  604010 cri.go:89] found id: ""
	I1213 11:54:03.349591  604010 logs.go:282] 0 containers: []
	W1213 11:54:03.349600  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:03.349607  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:03.349667  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:03.374734  604010 cri.go:89] found id: ""
	I1213 11:54:03.374758  604010 logs.go:282] 0 containers: []
	W1213 11:54:03.374767  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:03.374774  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:03.374842  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:03.400074  604010 cri.go:89] found id: ""
	I1213 11:54:03.400099  604010 logs.go:282] 0 containers: []
	W1213 11:54:03.400108  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:03.400114  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:03.400172  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:03.461432  604010 cri.go:89] found id: ""
	I1213 11:54:03.461533  604010 logs.go:282] 0 containers: []
	W1213 11:54:03.461561  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:03.461583  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:03.461673  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:03.504466  604010 cri.go:89] found id: ""
	I1213 11:54:03.504544  604010 logs.go:282] 0 containers: []
	W1213 11:54:03.504566  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:03.504585  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:03.504671  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:03.545459  604010 cri.go:89] found id: ""
	I1213 11:54:03.545482  604010 logs.go:282] 0 containers: []
	W1213 11:54:03.545491  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:03.545497  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:03.545575  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:03.570446  604010 cri.go:89] found id: ""
	I1213 11:54:03.570468  604010 logs.go:282] 0 containers: []
	W1213 11:54:03.570476  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:03.570482  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:03.570539  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:03.595001  604010 cri.go:89] found id: ""
	I1213 11:54:03.595023  604010 logs.go:282] 0 containers: []
	W1213 11:54:03.595031  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:03.595041  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:03.595057  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:03.610922  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:03.610955  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:03.679130  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:03.671134    4462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:03.671746    4462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:03.673204    4462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:03.673644    4462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:03.675078    4462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:03.671134    4462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:03.671746    4462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:03.673204    4462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:03.673644    4462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:03.675078    4462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:54:03.679152  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:03.679167  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:03.705484  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:03.705522  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:03.732753  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:03.732778  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
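Every "describe nodes" attempt dies the same way: kubectl dials https://localhost:8443 and the TCP connection is refused, which is exactly what the empty kube-apiserver listing predicts. A hedged way to confirm the refusal is "no listener" rather than a misbehaving server (the port number is copied from the log; the ss invocation is a standard check, not something minikube runs):

    # Sketch: check whether anything is listening on the apiserver port.
    sudo ss -ltn 'sport = :8443' | grep -q ':8443' \
      && echo "listener present on 8443" \
      || echo "no listener on 8443: connection refused is expected"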
	I1213 11:54:06.289051  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:06.299935  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:06.300031  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:06.325745  604010 cri.go:89] found id: ""
	I1213 11:54:06.325777  604010 logs.go:282] 0 containers: []
	W1213 11:54:06.325787  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:06.325794  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:06.325898  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:06.352273  604010 cri.go:89] found id: ""
	I1213 11:54:06.352342  604010 logs.go:282] 0 containers: []
	W1213 11:54:06.352357  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:06.352365  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:06.352437  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:06.376413  604010 cri.go:89] found id: ""
	I1213 11:54:06.376482  604010 logs.go:282] 0 containers: []
	W1213 11:54:06.376507  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:06.376520  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:06.376596  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:06.406144  604010 cri.go:89] found id: ""
	I1213 11:54:06.406188  604010 logs.go:282] 0 containers: []
	W1213 11:54:06.406198  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:06.406206  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:06.406285  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:06.456311  604010 cri.go:89] found id: ""
	I1213 11:54:06.456388  604010 logs.go:282] 0 containers: []
	W1213 11:54:06.456411  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:06.456430  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:06.456526  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:06.510060  604010 cri.go:89] found id: ""
	I1213 11:54:06.510150  604010 logs.go:282] 0 containers: []
	W1213 11:54:06.510174  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:06.510194  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:06.510310  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:06.542373  604010 cri.go:89] found id: ""
	I1213 11:54:06.542450  604010 logs.go:282] 0 containers: []
	W1213 11:54:06.542472  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:06.542494  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:06.542601  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:06.567983  604010 cri.go:89] found id: ""
	I1213 11:54:06.568063  604010 logs.go:282] 0 containers: []
	W1213 11:54:06.568087  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:06.568104  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:06.568129  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:06.624463  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:06.624498  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:06.640970  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:06.641003  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:06.714019  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:06.704918    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:06.705767    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:06.706758    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:06.708430    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:06.708734    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:06.704918    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:06.705767    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:06.706758    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:06.708430    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:06.708734    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:54:06.714096  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:06.714117  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:06.739708  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:06.739748  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
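With no containers to inspect, the gatherer falls back to host-level sources: the kubelet and containerd journals, the kernel ring buffer, and a raw container-status dump. The same bundle can be captured manually; every command below is copied from the log, and only the output path is an invented example:

    # Sketch: hand-collect the diagnostics minikube gathers each cycle.
    # /tmp/minikube-diag.txt is an illustrative path, not minikube's own.
    {
      sudo journalctl -u kubelet -n 400
      sudo journalctl -u containerd -n 400
      sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
      sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
    } > /tmp/minikube-diag.txt 2>&1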
	I1213 11:54:09.268501  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:09.279334  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:09.279413  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:09.308998  604010 cri.go:89] found id: ""
	I1213 11:54:09.309034  604010 logs.go:282] 0 containers: []
	W1213 11:54:09.309043  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:09.309050  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:09.309110  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:09.336921  604010 cri.go:89] found id: ""
	I1213 11:54:09.336947  604010 logs.go:282] 0 containers: []
	W1213 11:54:09.336956  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:09.336963  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:09.337025  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:09.367100  604010 cri.go:89] found id: ""
	I1213 11:54:09.367123  604010 logs.go:282] 0 containers: []
	W1213 11:54:09.367131  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:09.367138  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:09.367196  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:09.392881  604010 cri.go:89] found id: ""
	I1213 11:54:09.392913  604010 logs.go:282] 0 containers: []
	W1213 11:54:09.392922  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:09.392930  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:09.392991  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:09.433300  604010 cri.go:89] found id: ""
	I1213 11:54:09.433330  604010 logs.go:282] 0 containers: []
	W1213 11:54:09.433339  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:09.433345  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:09.433408  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:09.499329  604010 cri.go:89] found id: ""
	I1213 11:54:09.499357  604010 logs.go:282] 0 containers: []
	W1213 11:54:09.499365  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:09.499372  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:09.499434  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:09.526348  604010 cri.go:89] found id: ""
	I1213 11:54:09.526383  604010 logs.go:282] 0 containers: []
	W1213 11:54:09.526392  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:09.526399  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:09.526467  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:09.551552  604010 cri.go:89] found id: ""
	I1213 11:54:09.551585  604010 logs.go:282] 0 containers: []
	W1213 11:54:09.551595  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:09.551605  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:09.551617  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:09.607976  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:09.608011  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:09.624198  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:09.624228  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:09.692042  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:09.683184    4687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:09.683833    4687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:09.685650    4687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:09.686276    4687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:09.688111    4687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:09.683184    4687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:09.683833    4687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:09.685650    4687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:09.686276    4687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:09.688111    4687 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:54:09.692065  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:09.692077  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:09.717762  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:09.717799  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
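Each cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`: with -f the pattern is matched against the full command line, -x requires the whole line to match it, and -n keeps only the newest hit. Its empty result (exit status 1) is what routes the code into the CRI listings. A sketch of the same gate:

    # Sketch: the process-level gate that precedes the container listings.
    if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
      echo "apiserver process is running"
    else
      echo "no apiserver process; fall back to crictl probes"
    fi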
	I1213 11:54:12.251306  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:12.261889  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:12.261958  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:12.286128  604010 cri.go:89] found id: ""
	I1213 11:54:12.286151  604010 logs.go:282] 0 containers: []
	W1213 11:54:12.286160  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:12.286166  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:12.286231  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:12.320955  604010 cri.go:89] found id: ""
	I1213 11:54:12.320982  604010 logs.go:282] 0 containers: []
	W1213 11:54:12.320992  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:12.320999  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:12.321064  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:12.347366  604010 cri.go:89] found id: ""
	I1213 11:54:12.347394  604010 logs.go:282] 0 containers: []
	W1213 11:54:12.347404  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:12.347411  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:12.347475  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:12.372047  604010 cri.go:89] found id: ""
	I1213 11:54:12.372075  604010 logs.go:282] 0 containers: []
	W1213 11:54:12.372084  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:12.372091  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:12.372211  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:12.397441  604010 cri.go:89] found id: ""
	I1213 11:54:12.397466  604010 logs.go:282] 0 containers: []
	W1213 11:54:12.397475  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:12.397482  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:12.397610  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:12.458383  604010 cri.go:89] found id: ""
	I1213 11:54:12.458464  604010 logs.go:282] 0 containers: []
	W1213 11:54:12.458487  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:12.458505  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:12.458610  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:12.499011  604010 cri.go:89] found id: ""
	I1213 11:54:12.499087  604010 logs.go:282] 0 containers: []
	W1213 11:54:12.499110  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:12.499128  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:12.499223  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:12.526019  604010 cri.go:89] found id: ""
	I1213 11:54:12.526048  604010 logs.go:282] 0 containers: []
	W1213 11:54:12.526058  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:12.526068  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:12.526079  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:12.582388  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:12.582425  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:12.598760  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:12.598788  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:12.668226  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:12.659694    4803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:12.660116    4803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:12.661902    4803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:12.662352    4803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:12.663961    4803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:12.659694    4803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:12.660116    4803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:12.661902    4803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:12.662352    4803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:12.663961    4803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:54:12.668250  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:12.668263  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:12.698476  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:12.698514  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:15.226309  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:15.237066  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:15.237138  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:15.261808  604010 cri.go:89] found id: ""
	I1213 11:54:15.261836  604010 logs.go:282] 0 containers: []
	W1213 11:54:15.261845  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:15.261851  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:15.261912  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:15.286942  604010 cri.go:89] found id: ""
	I1213 11:54:15.286966  604010 logs.go:282] 0 containers: []
	W1213 11:54:15.286975  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:15.286981  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:15.287066  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:15.311813  604010 cri.go:89] found id: ""
	I1213 11:54:15.311842  604010 logs.go:282] 0 containers: []
	W1213 11:54:15.311852  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:15.311859  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:15.311920  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:15.341088  604010 cri.go:89] found id: ""
	I1213 11:54:15.341116  604010 logs.go:282] 0 containers: []
	W1213 11:54:15.341124  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:15.341131  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:15.341188  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:15.365220  604010 cri.go:89] found id: ""
	I1213 11:54:15.365247  604010 logs.go:282] 0 containers: []
	W1213 11:54:15.365256  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:15.365263  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:15.365319  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:15.389056  604010 cri.go:89] found id: ""
	I1213 11:54:15.389084  604010 logs.go:282] 0 containers: []
	W1213 11:54:15.389093  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:15.389099  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:15.389159  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:15.424168  604010 cri.go:89] found id: ""
	I1213 11:54:15.424197  604010 logs.go:282] 0 containers: []
	W1213 11:54:15.424206  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:15.424215  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:15.424275  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:15.458977  604010 cri.go:89] found id: ""
	I1213 11:54:15.459014  604010 logs.go:282] 0 containers: []
	W1213 11:54:15.459023  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:15.459033  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:15.459045  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:15.488624  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:15.488665  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:15.534272  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:15.534300  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:15.593055  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:15.593092  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:15.609340  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:15.609370  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:15.673503  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:15.664722    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:15.665497    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:15.667260    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:15.667958    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:15.669529    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:15.664722    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:15.665497    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:15.667260    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:15.667958    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:15.669529    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
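Reading the timestamps down the page, the whole probe-and-gather cycle repeats on a fixed ~3-second cadence (11:54:00, :03, :06, ...), which is what inflates one missing apiserver into hundreds of near-identical log pages. A sketch of that wait loop with an explicit deadline; the 3 s interval is read off the timestamps, while the 120 s timeout is purely an assumption and not minikube's own value:

    # Sketch: the retry cadence implied by the timestamps; TIMEOUT is invented.
    TIMEOUT=120
    start=$(date +%s)
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      [ $(( $(date +%s) - start )) -ge "$TIMEOUT" ] && {
        echo "apiserver never appeared within ${TIMEOUT}s" >&2; exit 1; }
      sleep 3
    done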
	I1213 11:54:18.175202  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:18.185611  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:18.185684  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:18.216571  604010 cri.go:89] found id: ""
	I1213 11:54:18.216598  604010 logs.go:282] 0 containers: []
	W1213 11:54:18.216609  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:18.216616  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:18.216676  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:18.244020  604010 cri.go:89] found id: ""
	I1213 11:54:18.244044  604010 logs.go:282] 0 containers: []
	W1213 11:54:18.244053  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:18.244060  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:18.244125  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:18.269644  604010 cri.go:89] found id: ""
	I1213 11:54:18.269677  604010 logs.go:282] 0 containers: []
	W1213 11:54:18.269686  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:18.269699  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:18.269759  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:18.295049  604010 cri.go:89] found id: ""
	I1213 11:54:18.295074  604010 logs.go:282] 0 containers: []
	W1213 11:54:18.295084  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:18.295092  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:18.295151  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:18.319970  604010 cri.go:89] found id: ""
	I1213 11:54:18.319994  604010 logs.go:282] 0 containers: []
	W1213 11:54:18.320003  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:18.320009  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:18.320068  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:18.348557  604010 cri.go:89] found id: ""
	I1213 11:54:18.348583  604010 logs.go:282] 0 containers: []
	W1213 11:54:18.348591  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:18.348598  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:18.348661  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:18.372733  604010 cri.go:89] found id: ""
	I1213 11:54:18.372759  604010 logs.go:282] 0 containers: []
	W1213 11:54:18.372769  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:18.372775  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:18.372833  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:18.397904  604010 cri.go:89] found id: ""
	I1213 11:54:18.397927  604010 logs.go:282] 0 containers: []
	W1213 11:54:18.397936  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:18.397945  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:18.397958  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:18.475145  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:18.475177  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:18.509115  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:18.509140  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:18.578046  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:18.568558    5027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:18.569407    5027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:18.571224    5027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:18.571849    5027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:18.573663    5027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:18.568558    5027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:18.569407    5027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:18.571224    5027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:18.571849    5027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:18.573663    5027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:54:18.578069  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:18.578080  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:18.604022  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:18.604057  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
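The five memcache.go lines per attempt all appear to report the same failure: kubectl's API discovery retrying GET /api?timeout=32s and being refused at the TCP layer each time, so they count as one problem, not five. The request can be issued directly (URL copied from the log; -k only because a refused connection never reaches TLS anyway):

    # Sketch: the discovery request kubectl keeps retrying, sent by hand.
    curl -sk --max-time 5 'https://localhost:8443/api?timeout=32s' \
      || echo "connection refused, matching the memcache.go errors"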
	I1213 11:54:21.135717  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:21.151653  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:21.151722  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:21.181267  604010 cri.go:89] found id: ""
	I1213 11:54:21.181292  604010 logs.go:282] 0 containers: []
	W1213 11:54:21.181300  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:21.181306  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:21.181363  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:21.211036  604010 cri.go:89] found id: ""
	I1213 11:54:21.211064  604010 logs.go:282] 0 containers: []
	W1213 11:54:21.211073  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:21.211079  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:21.211136  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:21.235057  604010 cri.go:89] found id: ""
	I1213 11:54:21.235082  604010 logs.go:282] 0 containers: []
	W1213 11:54:21.235091  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:21.235097  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:21.235158  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:21.259604  604010 cri.go:89] found id: ""
	I1213 11:54:21.259629  604010 logs.go:282] 0 containers: []
	W1213 11:54:21.259637  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:21.259644  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:21.259710  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:21.284921  604010 cri.go:89] found id: ""
	I1213 11:54:21.284948  604010 logs.go:282] 0 containers: []
	W1213 11:54:21.284957  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:21.284963  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:21.285022  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:21.311134  604010 cri.go:89] found id: ""
	I1213 11:54:21.311162  604010 logs.go:282] 0 containers: []
	W1213 11:54:21.311171  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:21.311178  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:21.311238  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:21.337100  604010 cri.go:89] found id: ""
	I1213 11:54:21.337124  604010 logs.go:282] 0 containers: []
	W1213 11:54:21.337133  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:21.337140  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:21.337201  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:21.361945  604010 cri.go:89] found id: ""
	I1213 11:54:21.361969  604010 logs.go:282] 0 containers: []
	W1213 11:54:21.361977  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:21.361987  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:21.362001  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:21.424925  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:21.424964  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:21.442370  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:21.442449  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:21.544421  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:21.527951    5142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:21.529143    5142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:21.530038    5142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:21.535082    5142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:21.535420    5142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:21.527951    5142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:21.529143    5142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:21.530038    5142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:21.535082    5142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:21.535420    5142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:54:21.544487  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:21.544508  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:21.569861  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:21.569899  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
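The container-status command is defensive twice over: `which crictl || echo crictl` degrades to a bare PATH lookup under sudo if which finds nothing, and the trailing `|| sudo docker ps -a` covers nodes whose runtime is Docker rather than containerd. The log uses backticks; the same chain in the usual $() form:

    # Sketch: the logged crictl/docker fallback chain, rewritten with $().
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a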
	I1213 11:54:24.098574  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:24.109255  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:24.109328  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:24.135881  604010 cri.go:89] found id: ""
	I1213 11:54:24.135904  604010 logs.go:282] 0 containers: []
	W1213 11:54:24.135913  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:24.135919  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:24.135976  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:24.160249  604010 cri.go:89] found id: ""
	I1213 11:54:24.160272  604010 logs.go:282] 0 containers: []
	W1213 11:54:24.160281  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:24.160294  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:24.160356  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:24.185097  604010 cri.go:89] found id: ""
	I1213 11:54:24.185120  604010 logs.go:282] 0 containers: []
	W1213 11:54:24.185129  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:24.185136  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:24.185197  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:24.210052  604010 cri.go:89] found id: ""
	I1213 11:54:24.210133  604010 logs.go:282] 0 containers: []
	W1213 11:54:24.210156  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:24.210174  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:24.210263  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:24.234868  604010 cri.go:89] found id: ""
	I1213 11:54:24.234895  604010 logs.go:282] 0 containers: []
	W1213 11:54:24.234905  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:24.234912  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:24.234968  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:24.258998  604010 cri.go:89] found id: ""
	I1213 11:54:24.259023  604010 logs.go:282] 0 containers: []
	W1213 11:54:24.259032  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:24.259039  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:24.259099  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:24.282644  604010 cri.go:89] found id: ""
	I1213 11:54:24.282672  604010 logs.go:282] 0 containers: []
	W1213 11:54:24.282713  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:24.282721  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:24.282780  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:24.312793  604010 cri.go:89] found id: ""
	I1213 11:54:24.312822  604010 logs.go:282] 0 containers: []
	W1213 11:54:24.312831  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:24.312841  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:24.312853  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:24.328614  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:24.328643  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:24.398953  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:24.390748    5250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:24.391466    5250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:24.392548    5250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:24.393304    5250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:24.394893    5250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:24.398978  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:24.398992  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:24.447276  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:24.447353  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:24.512358  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:24.512384  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
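	The pass above is one iteration of minikube's apiserver wait loop: pgrep checks for a live kube-apiserver process, crictl is asked for each expected control-plane container, and when nothing turns up the harness gathers kubelet, dmesg, describe-nodes, containerd, and container-status logs, then retries a few seconds later. The same triage can be done by hand from a shell inside the node (a minimal sketch using only commands that appear in this log; reach the node however your setup allows, e.g. minikube ssh):

	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'   # any apiserver process at all?
	    sudo crictl ps -a --name kube-apiserver        # its containers, including exited ones
	    sudo journalctl -u kubelet -n 400              # kubelet's reasons for not starting static pods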
	I1213 11:54:27.079756  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:27.090085  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:27.090157  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:27.114934  604010 cri.go:89] found id: ""
	I1213 11:54:27.114957  604010 logs.go:282] 0 containers: []
	W1213 11:54:27.114966  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:27.114972  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:27.115032  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:27.139399  604010 cri.go:89] found id: ""
	I1213 11:54:27.139424  604010 logs.go:282] 0 containers: []
	W1213 11:54:27.139433  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:27.139439  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:27.139496  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:27.164348  604010 cri.go:89] found id: ""
	I1213 11:54:27.164371  604010 logs.go:282] 0 containers: []
	W1213 11:54:27.164379  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:27.164385  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:27.164443  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:27.189263  604010 cri.go:89] found id: ""
	I1213 11:54:27.189286  604010 logs.go:282] 0 containers: []
	W1213 11:54:27.189294  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:27.189302  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:27.189362  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:27.214003  604010 cri.go:89] found id: ""
	I1213 11:54:27.214076  604010 logs.go:282] 0 containers: []
	W1213 11:54:27.214101  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:27.214121  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:27.214204  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:27.238568  604010 cri.go:89] found id: ""
	I1213 11:54:27.238632  604010 logs.go:282] 0 containers: []
	W1213 11:54:27.238657  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:27.238675  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:27.238861  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:27.263827  604010 cri.go:89] found id: ""
	I1213 11:54:27.263850  604010 logs.go:282] 0 containers: []
	W1213 11:54:27.263858  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:27.263864  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:27.263941  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:27.293643  604010 cri.go:89] found id: ""
	I1213 11:54:27.293672  604010 logs.go:282] 0 containers: []
	W1213 11:54:27.293680  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:27.293691  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:27.293706  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:27.353462  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:27.353498  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:27.369639  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:27.369723  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:27.462957  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:27.448639    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:27.449130    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:27.455578    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:27.456379    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:27.459064    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:27.462984  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:27.463007  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:27.502080  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:27.502115  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:30.033979  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:30.048817  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:30.048921  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:30.086312  604010 cri.go:89] found id: ""
	I1213 11:54:30.086343  604010 logs.go:282] 0 containers: []
	W1213 11:54:30.086353  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:30.086361  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:30.086431  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:30.118027  604010 cri.go:89] found id: ""
	I1213 11:54:30.118056  604010 logs.go:282] 0 containers: []
	W1213 11:54:30.118066  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:30.118073  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:30.118139  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:30.150398  604010 cri.go:89] found id: ""
	I1213 11:54:30.150422  604010 logs.go:282] 0 containers: []
	W1213 11:54:30.150431  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:30.150437  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:30.150501  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:30.176994  604010 cri.go:89] found id: ""
	I1213 11:54:30.177024  604010 logs.go:282] 0 containers: []
	W1213 11:54:30.177033  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:30.177040  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:30.177102  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:30.204667  604010 cri.go:89] found id: ""
	I1213 11:54:30.204692  604010 logs.go:282] 0 containers: []
	W1213 11:54:30.204702  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:30.204709  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:30.204768  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:30.233311  604010 cri.go:89] found id: ""
	I1213 11:54:30.233340  604010 logs.go:282] 0 containers: []
	W1213 11:54:30.233350  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:30.233357  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:30.233443  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:30.258722  604010 cri.go:89] found id: ""
	I1213 11:54:30.258749  604010 logs.go:282] 0 containers: []
	W1213 11:54:30.258759  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:30.258766  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:30.258828  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:30.284738  604010 cri.go:89] found id: ""
	I1213 11:54:30.284766  604010 logs.go:282] 0 containers: []
	W1213 11:54:30.284775  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:30.284785  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:30.284797  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:30.352842  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:30.344108    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:30.344689    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:30.346232    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:30.346735    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:30.348264    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:30.352861  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:30.352873  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:30.377958  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:30.377993  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:30.409746  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:30.409777  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:30.497989  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:30.498042  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
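	The dmesg invocation used for the "dmesg" gather step packs several short options together. Spelled out with the util-linux long forms (a sketch assuming a current util-linux dmesg; behavior should be unchanged):

	    sudo dmesg --nopager --human --color=never \
	        --level warn,err,crit,alert,emerg | tail -n 400   # last 400 warning-or-worse kernel messages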
	I1213 11:54:33.019623  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:33.030945  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:33.031018  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:33.060411  604010 cri.go:89] found id: ""
	I1213 11:54:33.060436  604010 logs.go:282] 0 containers: []
	W1213 11:54:33.060445  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:33.060452  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:33.060514  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:33.085659  604010 cri.go:89] found id: ""
	I1213 11:54:33.085684  604010 logs.go:282] 0 containers: []
	W1213 11:54:33.085693  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:33.085700  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:33.085762  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:33.110577  604010 cri.go:89] found id: ""
	I1213 11:54:33.110603  604010 logs.go:282] 0 containers: []
	W1213 11:54:33.110612  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:33.110618  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:33.110676  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:33.140224  604010 cri.go:89] found id: ""
	I1213 11:54:33.140252  604010 logs.go:282] 0 containers: []
	W1213 11:54:33.140261  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:33.140267  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:33.140328  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:33.165441  604010 cri.go:89] found id: ""
	I1213 11:54:33.165467  604010 logs.go:282] 0 containers: []
	W1213 11:54:33.165477  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:33.165483  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:33.165574  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:33.191299  604010 cri.go:89] found id: ""
	I1213 11:54:33.191324  604010 logs.go:282] 0 containers: []
	W1213 11:54:33.191332  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:33.191339  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:33.191400  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:33.216285  604010 cri.go:89] found id: ""
	I1213 11:54:33.216311  604010 logs.go:282] 0 containers: []
	W1213 11:54:33.216320  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:33.216327  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:33.216386  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:33.241156  604010 cri.go:89] found id: ""
	I1213 11:54:33.241180  604010 logs.go:282] 0 containers: []
	W1213 11:54:33.241189  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:33.241199  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:33.241210  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:33.269984  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:33.270014  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:33.326746  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:33.326782  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:33.343845  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:33.343874  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:33.421478  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:33.403624    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:33.404936    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:33.405920    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:33.407713    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:33.408279    5606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:33.421564  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:33.421594  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:35.956688  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:35.967776  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:35.967847  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:35.992715  604010 cri.go:89] found id: ""
	I1213 11:54:35.992745  604010 logs.go:282] 0 containers: []
	W1213 11:54:35.992753  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:35.992760  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:35.992821  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:36.030819  604010 cri.go:89] found id: ""
	I1213 11:54:36.030854  604010 logs.go:282] 0 containers: []
	W1213 11:54:36.030864  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:36.030870  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:36.030940  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:36.056512  604010 cri.go:89] found id: ""
	I1213 11:54:36.056537  604010 logs.go:282] 0 containers: []
	W1213 11:54:36.056547  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:36.056553  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:36.056613  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:36.083355  604010 cri.go:89] found id: ""
	I1213 11:54:36.083381  604010 logs.go:282] 0 containers: []
	W1213 11:54:36.083390  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:36.083397  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:36.083458  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:36.109765  604010 cri.go:89] found id: ""
	I1213 11:54:36.109791  604010 logs.go:282] 0 containers: []
	W1213 11:54:36.109799  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:36.109806  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:36.109866  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:36.139001  604010 cri.go:89] found id: ""
	I1213 11:54:36.139030  604010 logs.go:282] 0 containers: []
	W1213 11:54:36.139040  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:36.139048  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:36.139109  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:36.164252  604010 cri.go:89] found id: ""
	I1213 11:54:36.164280  604010 logs.go:282] 0 containers: []
	W1213 11:54:36.164290  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:36.164297  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:36.164419  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:36.193554  604010 cri.go:89] found id: ""
	I1213 11:54:36.193579  604010 logs.go:282] 0 containers: []
	W1213 11:54:36.193588  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:36.193597  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:36.193609  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:36.225514  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:36.225555  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:36.284505  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:36.284551  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:36.300602  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:36.300632  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:36.368620  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:36.358956    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:36.360036    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:36.361784    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:36.362389    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:36.364078    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:36.368642  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:36.368654  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
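	The "container status" gather step is a shell fallback one-liner: use crictl when it is installed, otherwise fall back to docker. The same command with $(...) in place of backticks, annotated:

	    # `which crictl || echo crictl` always yields a non-empty word, so the
	    # first command is attempted even when crictl is missing; its failure
	    # then triggers the `|| sudo docker ps -a` fallback.
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a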
	I1213 11:54:38.894313  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:38.906401  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:38.906478  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:38.931173  604010 cri.go:89] found id: ""
	I1213 11:54:38.931200  604010 logs.go:282] 0 containers: []
	W1213 11:54:38.931210  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:38.931217  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:38.931280  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:38.957289  604010 cri.go:89] found id: ""
	I1213 11:54:38.957315  604010 logs.go:282] 0 containers: []
	W1213 11:54:38.957324  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:38.957330  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:38.957391  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:38.984282  604010 cri.go:89] found id: ""
	I1213 11:54:38.984307  604010 logs.go:282] 0 containers: []
	W1213 11:54:38.984317  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:38.984323  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:38.984402  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:39.012924  604010 cri.go:89] found id: ""
	I1213 11:54:39.012994  604010 logs.go:282] 0 containers: []
	W1213 11:54:39.013012  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:39.013021  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:39.013085  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:39.039025  604010 cri.go:89] found id: ""
	I1213 11:54:39.039062  604010 logs.go:282] 0 containers: []
	W1213 11:54:39.039071  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:39.039077  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:39.039145  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:39.066984  604010 cri.go:89] found id: ""
	I1213 11:54:39.067009  604010 logs.go:282] 0 containers: []
	W1213 11:54:39.067018  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:39.067024  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:39.067088  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:39.093147  604010 cri.go:89] found id: ""
	I1213 11:54:39.093172  604010 logs.go:282] 0 containers: []
	W1213 11:54:39.093181  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:39.093188  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:39.093247  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:39.120841  604010 cri.go:89] found id: ""
	I1213 11:54:39.120866  604010 logs.go:282] 0 containers: []
	W1213 11:54:39.120875  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:39.120884  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:39.120896  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:39.177077  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:39.177113  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:39.193258  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:39.193284  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:39.255506  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:39.246949    5824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:39.247600    5824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:39.249297    5824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:39.249837    5824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:39.251408    5824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:39.255531  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:39.255546  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:39.280959  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:39.280995  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:41.808371  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:41.820751  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:41.820829  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:41.847226  604010 cri.go:89] found id: ""
	I1213 11:54:41.847249  604010 logs.go:282] 0 containers: []
	W1213 11:54:41.847258  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:41.847264  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:41.847322  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:41.873405  604010 cri.go:89] found id: ""
	I1213 11:54:41.873436  604010 logs.go:282] 0 containers: []
	W1213 11:54:41.873448  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:41.873455  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:41.873519  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:41.899479  604010 cri.go:89] found id: ""
	I1213 11:54:41.899509  604010 logs.go:282] 0 containers: []
	W1213 11:54:41.899518  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:41.899524  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:41.899582  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:41.923515  604010 cri.go:89] found id: ""
	I1213 11:54:41.923545  604010 logs.go:282] 0 containers: []
	W1213 11:54:41.923554  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:41.923561  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:41.923621  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:41.952086  604010 cri.go:89] found id: ""
	I1213 11:54:41.952110  604010 logs.go:282] 0 containers: []
	W1213 11:54:41.952119  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:41.952125  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:41.952182  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:41.976613  604010 cri.go:89] found id: ""
	I1213 11:54:41.976637  604010 logs.go:282] 0 containers: []
	W1213 11:54:41.976646  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:41.976653  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:41.976714  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:42.010402  604010 cri.go:89] found id: ""
	I1213 11:54:42.010434  604010 logs.go:282] 0 containers: []
	W1213 11:54:42.010443  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:42.010450  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:42.010520  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:42.038928  604010 cri.go:89] found id: ""
	I1213 11:54:42.038955  604010 logs.go:282] 0 containers: []
	W1213 11:54:42.038964  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:42.038974  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:42.038985  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:42.096963  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:42.097004  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:42.115172  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:42.115213  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:42.192959  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:42.182320    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:42.183391    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:42.184373    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:42.186141    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:42.186781    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:54:42.192981  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:42.192995  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:42.219986  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:42.220023  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:44.750998  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:44.761521  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:44.761601  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:44.785581  604010 cri.go:89] found id: ""
	I1213 11:54:44.785609  604010 logs.go:282] 0 containers: []
	W1213 11:54:44.785618  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:44.785625  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:44.785681  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:44.810312  604010 cri.go:89] found id: ""
	I1213 11:54:44.810340  604010 logs.go:282] 0 containers: []
	W1213 11:54:44.810349  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:44.810356  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:44.810419  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:44.834980  604010 cri.go:89] found id: ""
	I1213 11:54:44.835004  604010 logs.go:282] 0 containers: []
	W1213 11:54:44.835012  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:44.835018  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:44.835082  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:44.868160  604010 cri.go:89] found id: ""
	I1213 11:54:44.868187  604010 logs.go:282] 0 containers: []
	W1213 11:54:44.868196  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:44.868203  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:44.868263  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:44.893689  604010 cri.go:89] found id: ""
	I1213 11:54:44.893715  604010 logs.go:282] 0 containers: []
	W1213 11:54:44.893723  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:44.893730  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:44.893788  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:44.918090  604010 cri.go:89] found id: ""
	I1213 11:54:44.918119  604010 logs.go:282] 0 containers: []
	W1213 11:54:44.918128  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:44.918135  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:44.918196  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:44.944994  604010 cri.go:89] found id: ""
	I1213 11:54:44.945022  604010 logs.go:282] 0 containers: []
	W1213 11:54:44.945032  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:44.945038  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:44.945102  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:44.969862  604010 cri.go:89] found id: ""
	I1213 11:54:44.969891  604010 logs.go:282] 0 containers: []
	W1213 11:54:44.969900  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:44.969910  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:44.969921  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:45.027468  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:45.027521  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:45.054117  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:45.054213  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:45.178092  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:45.159739    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:45.160529    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:45.166319    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:45.166867    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:45.169009    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:45.159739    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:45.160529    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:45.166319    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:45.166867    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:45.169009    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:54:45.178126  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:45.178168  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:45.209407  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:45.209462  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:47.757891  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:47.768440  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:47.768511  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:47.797232  604010 cri.go:89] found id: ""
	I1213 11:54:47.797258  604010 logs.go:282] 0 containers: []
	W1213 11:54:47.797267  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:47.797274  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:47.797331  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:47.822035  604010 cri.go:89] found id: ""
	I1213 11:54:47.822059  604010 logs.go:282] 0 containers: []
	W1213 11:54:47.822068  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:47.822074  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:47.822139  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:47.850594  604010 cri.go:89] found id: ""
	I1213 11:54:47.850619  604010 logs.go:282] 0 containers: []
	W1213 11:54:47.850627  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:47.850634  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:47.850715  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:47.875934  604010 cri.go:89] found id: ""
	I1213 11:54:47.875958  604010 logs.go:282] 0 containers: []
	W1213 11:54:47.875967  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:47.875975  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:47.876036  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:47.904019  604010 cri.go:89] found id: ""
	I1213 11:54:47.904043  604010 logs.go:282] 0 containers: []
	W1213 11:54:47.904051  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:47.904058  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:47.904122  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:47.928717  604010 cri.go:89] found id: ""
	I1213 11:54:47.928743  604010 logs.go:282] 0 containers: []
	W1213 11:54:47.928751  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:47.928758  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:47.928818  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:47.953107  604010 cri.go:89] found id: ""
	I1213 11:54:47.953135  604010 logs.go:282] 0 containers: []
	W1213 11:54:47.953144  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:47.953152  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:47.953228  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:47.977855  604010 cri.go:89] found id: ""
	I1213 11:54:47.977891  604010 logs.go:282] 0 containers: []
	W1213 11:54:47.977900  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:47.977910  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:47.977940  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:48.033045  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:48.033085  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:48.049516  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:48.049571  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:48.119802  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:48.111384    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:48.112145    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:48.113839    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:48.114220    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:48.115737    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:48.111384    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:48.112145    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:48.113839    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:48.114220    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:48.115737    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:54:48.119824  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:48.119837  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:48.144575  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:48.144606  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:50.674890  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:50.689012  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:50.689130  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:50.747025  604010 cri.go:89] found id: ""
	I1213 11:54:50.747102  604010 logs.go:282] 0 containers: []
	W1213 11:54:50.747125  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:50.747143  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:50.747232  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:50.775729  604010 cri.go:89] found id: ""
	I1213 11:54:50.775795  604010 logs.go:282] 0 containers: []
	W1213 11:54:50.775812  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:50.775820  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:50.775887  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:50.799251  604010 cri.go:89] found id: ""
	I1213 11:54:50.799277  604010 logs.go:282] 0 containers: []
	W1213 11:54:50.799286  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:50.799292  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:50.799380  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:50.822964  604010 cri.go:89] found id: ""
	I1213 11:54:50.823033  604010 logs.go:282] 0 containers: []
	W1213 11:54:50.823047  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:50.823054  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:50.823125  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:50.851245  604010 cri.go:89] found id: ""
	I1213 11:54:50.851270  604010 logs.go:282] 0 containers: []
	W1213 11:54:50.851279  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:50.851285  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:50.851346  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:50.877382  604010 cri.go:89] found id: ""
	I1213 11:54:50.877405  604010 logs.go:282] 0 containers: []
	W1213 11:54:50.877414  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:50.877420  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:50.877478  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:50.903657  604010 cri.go:89] found id: ""
	I1213 11:54:50.903681  604010 logs.go:282] 0 containers: []
	W1213 11:54:50.903690  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:50.903696  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:50.903754  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:50.931954  604010 cri.go:89] found id: ""
	I1213 11:54:50.931977  604010 logs.go:282] 0 containers: []
	W1213 11:54:50.931992  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:50.932002  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:50.932016  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:50.988153  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:50.988188  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:51.004868  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:51.004912  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:51.078536  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:51.069572    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:51.070163    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:51.071963    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:51.072503    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:51.074005    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:51.069572    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:51.070163    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:51.071963    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:51.072503    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:51.074005    6280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:54:51.078558  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:51.078571  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:51.105933  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:51.105979  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:53.638010  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:53.648726  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:53.648799  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:53.692658  604010 cri.go:89] found id: ""
	I1213 11:54:53.692685  604010 logs.go:282] 0 containers: []
	W1213 11:54:53.692693  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:53.692700  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:53.692760  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:53.728295  604010 cri.go:89] found id: ""
	I1213 11:54:53.728326  604010 logs.go:282] 0 containers: []
	W1213 11:54:53.728335  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:53.728343  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:53.728402  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:53.768548  604010 cri.go:89] found id: ""
	I1213 11:54:53.768576  604010 logs.go:282] 0 containers: []
	W1213 11:54:53.768585  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:53.768591  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:53.768649  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:53.808130  604010 cri.go:89] found id: ""
	I1213 11:54:53.808152  604010 logs.go:282] 0 containers: []
	W1213 11:54:53.808161  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:53.808167  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:53.808231  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:53.832811  604010 cri.go:89] found id: ""
	I1213 11:54:53.832839  604010 logs.go:282] 0 containers: []
	W1213 11:54:53.832849  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:53.832856  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:53.832916  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:53.857746  604010 cri.go:89] found id: ""
	I1213 11:54:53.857770  604010 logs.go:282] 0 containers: []
	W1213 11:54:53.857778  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:53.857785  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:53.857844  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:53.881722  604010 cri.go:89] found id: ""
	I1213 11:54:53.881747  604010 logs.go:282] 0 containers: []
	W1213 11:54:53.881756  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:53.881763  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:53.881830  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:53.907820  604010 cri.go:89] found id: ""
	I1213 11:54:53.907844  604010 logs.go:282] 0 containers: []
	W1213 11:54:53.907854  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:53.907864  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:53.907877  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:53.963717  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:53.963753  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:53.979615  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:53.979645  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:54.065903  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:54.056577    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:54.057248    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:54.058603    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:54.059235    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:54.061166    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:54.056577    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:54.057248    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:54.058603    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:54.059235    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:54.061166    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:54:54.065924  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:54.065938  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:54.091653  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:54.091689  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:56.621960  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:56.633738  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:56.633810  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:56.692820  604010 cri.go:89] found id: ""
	I1213 11:54:56.692846  604010 logs.go:282] 0 containers: []
	W1213 11:54:56.692856  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:56.692863  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:56.692924  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:56.758799  604010 cri.go:89] found id: ""
	I1213 11:54:56.758842  604010 logs.go:282] 0 containers: []
	W1213 11:54:56.758870  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:56.758884  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:56.758978  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:56.784490  604010 cri.go:89] found id: ""
	I1213 11:54:56.784516  604010 logs.go:282] 0 containers: []
	W1213 11:54:56.784525  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:56.784532  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:56.784593  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:56.808898  604010 cri.go:89] found id: ""
	I1213 11:54:56.808919  604010 logs.go:282] 0 containers: []
	W1213 11:54:56.808928  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:56.808940  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:56.808998  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:56.833308  604010 cri.go:89] found id: ""
	I1213 11:54:56.833373  604010 logs.go:282] 0 containers: []
	W1213 11:54:56.833398  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:56.833416  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:56.833489  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:56.862468  604010 cri.go:89] found id: ""
	I1213 11:54:56.862543  604010 logs.go:282] 0 containers: []
	W1213 11:54:56.862568  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:56.862588  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:56.862678  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:56.891924  604010 cri.go:89] found id: ""
	I1213 11:54:56.891952  604010 logs.go:282] 0 containers: []
	W1213 11:54:56.891962  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:56.891969  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:56.892033  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:56.916269  604010 cri.go:89] found id: ""
	I1213 11:54:56.916296  604010 logs.go:282] 0 containers: []
	W1213 11:54:56.916306  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:56.916315  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:56.916327  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:54:56.980544  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:54:56.971761    6500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:56.972786    6500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:56.974371    6500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:56.974958    6500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:56.976490    6500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:54:56.971761    6500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:56.972786    6500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:56.974371    6500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:56.974958    6500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:54:56.976490    6500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:54:56.980565  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:54:56.980579  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:54:57.005423  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:54:57.005460  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:54:57.032993  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:57.033071  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:57.088966  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:57.089003  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:59.606260  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:54:59.617007  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:54:59.617079  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:54:59.644389  604010 cri.go:89] found id: ""
	I1213 11:54:59.644411  604010 logs.go:282] 0 containers: []
	W1213 11:54:59.644420  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:54:59.644427  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:54:59.644484  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:54:59.689247  604010 cri.go:89] found id: ""
	I1213 11:54:59.689273  604010 logs.go:282] 0 containers: []
	W1213 11:54:59.689282  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:54:59.689289  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:54:59.689348  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:54:59.729540  604010 cri.go:89] found id: ""
	I1213 11:54:59.729582  604010 logs.go:282] 0 containers: []
	W1213 11:54:59.729591  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:54:59.729597  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:54:59.729658  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:54:59.759256  604010 cri.go:89] found id: ""
	I1213 11:54:59.759286  604010 logs.go:282] 0 containers: []
	W1213 11:54:59.759295  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:54:59.759301  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:54:59.759362  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:54:59.788748  604010 cri.go:89] found id: ""
	I1213 11:54:59.788772  604010 logs.go:282] 0 containers: []
	W1213 11:54:59.788780  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:54:59.788787  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:54:59.788846  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:54:59.817278  604010 cri.go:89] found id: ""
	I1213 11:54:59.817313  604010 logs.go:282] 0 containers: []
	W1213 11:54:59.817322  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:54:59.817328  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:54:59.817389  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:54:59.842756  604010 cri.go:89] found id: ""
	I1213 11:54:59.842780  604010 logs.go:282] 0 containers: []
	W1213 11:54:59.842788  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:54:59.842794  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:54:59.842862  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:54:59.868412  604010 cri.go:89] found id: ""
	I1213 11:54:59.868435  604010 logs.go:282] 0 containers: []
	W1213 11:54:59.868443  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:54:59.868453  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:54:59.868464  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:54:59.924773  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:54:59.924808  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:54:59.940672  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:54:59.940704  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:00.041026  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:00.001683    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:00.002326    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:00.007036    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:00.009108    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:00.010359    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:00.001683    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:00.002326    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:00.007036    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:00.009108    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:00.010359    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:55:00.045695  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:00.045733  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:00.200188  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:00.200291  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:02.798329  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:02.808984  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:02.809067  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:02.836650  604010 cri.go:89] found id: ""
	I1213 11:55:02.836675  604010 logs.go:282] 0 containers: []
	W1213 11:55:02.836684  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:02.836692  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:02.836755  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:02.861812  604010 cri.go:89] found id: ""
	I1213 11:55:02.861837  604010 logs.go:282] 0 containers: []
	W1213 11:55:02.861846  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:02.861853  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:02.861915  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:02.892956  604010 cri.go:89] found id: ""
	I1213 11:55:02.892982  604010 logs.go:282] 0 containers: []
	W1213 11:55:02.892992  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:02.892999  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:02.893061  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:02.921418  604010 cri.go:89] found id: ""
	I1213 11:55:02.921444  604010 logs.go:282] 0 containers: []
	W1213 11:55:02.921454  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:02.921460  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:02.921517  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:02.945971  604010 cri.go:89] found id: ""
	I1213 11:55:02.945998  604010 logs.go:282] 0 containers: []
	W1213 11:55:02.946007  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:02.946013  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:02.946071  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:02.971224  604010 cri.go:89] found id: ""
	I1213 11:55:02.971249  604010 logs.go:282] 0 containers: []
	W1213 11:55:02.971258  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:02.971264  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:02.971322  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:02.996070  604010 cri.go:89] found id: ""
	I1213 11:55:02.996098  604010 logs.go:282] 0 containers: []
	W1213 11:55:02.996107  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:02.996113  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:02.996175  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:03.026595  604010 cri.go:89] found id: ""
	I1213 11:55:03.026628  604010 logs.go:282] 0 containers: []
	W1213 11:55:03.026637  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:03.026647  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:03.026662  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:03.083030  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:03.083068  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:03.099216  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:03.099247  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:03.164245  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:03.155657    6728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:03.156486    6728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:03.158171    6728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:03.158870    6728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:03.160386    6728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:03.155657    6728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:03.156486    6728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:03.158171    6728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:03.158870    6728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:03.160386    6728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:55:03.164269  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:03.164287  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:03.190063  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:03.190105  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:05.717488  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:05.729517  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:05.729651  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:05.754839  604010 cri.go:89] found id: ""
	I1213 11:55:05.754862  604010 logs.go:282] 0 containers: []
	W1213 11:55:05.754870  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:05.754877  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:05.754935  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:05.779444  604010 cri.go:89] found id: ""
	I1213 11:55:05.779470  604010 logs.go:282] 0 containers: []
	W1213 11:55:05.779478  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:05.779486  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:05.779546  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:05.804435  604010 cri.go:89] found id: ""
	I1213 11:55:05.804460  604010 logs.go:282] 0 containers: []
	W1213 11:55:05.804468  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:05.804475  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:05.804536  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:05.828365  604010 cri.go:89] found id: ""
	I1213 11:55:05.828431  604010 logs.go:282] 0 containers: []
	W1213 11:55:05.828454  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:05.828473  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:05.828538  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:05.853088  604010 cri.go:89] found id: ""
	I1213 11:55:05.853114  604010 logs.go:282] 0 containers: []
	W1213 11:55:05.853123  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:05.853129  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:05.853187  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:05.881265  604010 cri.go:89] found id: ""
	I1213 11:55:05.881288  604010 logs.go:282] 0 containers: []
	W1213 11:55:05.881297  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:05.881303  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:05.881363  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:05.907771  604010 cri.go:89] found id: ""
	I1213 11:55:05.907795  604010 logs.go:282] 0 containers: []
	W1213 11:55:05.907804  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:05.907811  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:05.907881  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:05.932155  604010 cri.go:89] found id: ""
	I1213 11:55:05.932181  604010 logs.go:282] 0 containers: []
	W1213 11:55:05.932189  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:05.932199  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:05.932211  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:05.960440  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:05.960467  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:06.018319  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:06.018357  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:06.034573  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:06.034602  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:06.099936  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:06.091153    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:06.091939    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:06.093705    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:06.094323    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:06.095974    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:06.091153    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:06.091939    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:06.093705    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:06.094323    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:06.095974    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:55:06.099962  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:06.099975  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:08.626581  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:08.637490  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:08.637574  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:08.674556  604010 cri.go:89] found id: ""
	I1213 11:55:08.674581  604010 logs.go:282] 0 containers: []
	W1213 11:55:08.674589  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:08.674598  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:08.674659  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:08.719063  604010 cri.go:89] found id: ""
	I1213 11:55:08.719087  604010 logs.go:282] 0 containers: []
	W1213 11:55:08.719095  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:08.719101  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:08.719166  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:08.761839  604010 cri.go:89] found id: ""
	I1213 11:55:08.761863  604010 logs.go:282] 0 containers: []
	W1213 11:55:08.761872  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:08.761878  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:08.761939  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:08.793242  604010 cri.go:89] found id: ""
	I1213 11:55:08.793266  604010 logs.go:282] 0 containers: []
	W1213 11:55:08.793274  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:08.793281  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:08.793338  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:08.823380  604010 cri.go:89] found id: ""
	I1213 11:55:08.823406  604010 logs.go:282] 0 containers: []
	W1213 11:55:08.823416  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:08.823424  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:08.823488  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:08.849669  604010 cri.go:89] found id: ""
	I1213 11:55:08.849696  604010 logs.go:282] 0 containers: []
	W1213 11:55:08.849705  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:08.849712  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:08.849773  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:08.876618  604010 cri.go:89] found id: ""
	I1213 11:55:08.876684  604010 logs.go:282] 0 containers: []
	W1213 11:55:08.876707  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:08.876726  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:08.876807  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:08.902762  604010 cri.go:89] found id: ""
	I1213 11:55:08.902802  604010 logs.go:282] 0 containers: []
	W1213 11:55:08.902811  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
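Note: the block above is minikube's CRI probe loop, one "crictl ps -a --quiet --name=<component>" per control-plane component, each returning an empty ID list. An equivalent sketch using the exact commands from the log (assumes crictl is installed on the node and containerd is the runtime):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done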
	I1213 11:55:08.902820  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:08.902833  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:08.918880  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:08.918910  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:08.990155  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:08.981658    6952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:08.982141    6952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:08.984095    6952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:08.984454    6952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:08.986001    6952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:08.981658    6952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:08.982141    6952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:08.984095    6952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:08.984454    6952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:08.986001    6952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:55:08.990182  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:08.990196  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:09.017239  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:09.017278  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:09.049754  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:09.049785  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
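Note: each "Gathering logs for ..." step corresponds to one of the host commands shown verbatim above. Collected together as a sketch for manual triage (assumes systemd units named kubelet and containerd, as on the minikube node; the docker branch is the fallback the log shows for when crictl is missing):

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a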
	I1213 11:55:11.607272  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:11.617804  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:11.617876  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:11.646336  604010 cri.go:89] found id: ""
	I1213 11:55:11.646359  604010 logs.go:282] 0 containers: []
	W1213 11:55:11.646368  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:11.646374  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:11.646434  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:11.684464  604010 cri.go:89] found id: ""
	I1213 11:55:11.684490  604010 logs.go:282] 0 containers: []
	W1213 11:55:11.684499  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:11.684505  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:11.684566  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:11.724793  604010 cri.go:89] found id: ""
	I1213 11:55:11.724816  604010 logs.go:282] 0 containers: []
	W1213 11:55:11.724824  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:11.724831  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:11.724890  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:11.760776  604010 cri.go:89] found id: ""
	I1213 11:55:11.760799  604010 logs.go:282] 0 containers: []
	W1213 11:55:11.760807  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:11.760814  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:11.760873  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:11.787122  604010 cri.go:89] found id: ""
	I1213 11:55:11.787195  604010 logs.go:282] 0 containers: []
	W1213 11:55:11.787217  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:11.787237  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:11.787333  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:11.812257  604010 cri.go:89] found id: ""
	I1213 11:55:11.812283  604010 logs.go:282] 0 containers: []
	W1213 11:55:11.812291  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:11.812298  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:11.812359  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:11.837304  604010 cri.go:89] found id: ""
	I1213 11:55:11.837341  604010 logs.go:282] 0 containers: []
	W1213 11:55:11.837350  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:11.837356  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:11.837427  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:11.861726  604010 cri.go:89] found id: ""
	I1213 11:55:11.861759  604010 logs.go:282] 0 containers: []
	W1213 11:55:11.861768  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:11.861778  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:11.861792  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:11.918248  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:11.918285  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:11.934535  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:11.934571  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:12.005308  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:11.993379    7063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:11.994149    7063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:11.995831    7063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:11.996328    7063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:11.998145    7063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:11.993379    7063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:11.994149    7063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:11.995831    7063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:11.996328    7063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:11.998145    7063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:55:12.005338  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:12.005351  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:12.031381  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:12.031415  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:14.558358  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:14.569230  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:14.569297  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:14.594108  604010 cri.go:89] found id: ""
	I1213 11:55:14.594186  604010 logs.go:282] 0 containers: []
	W1213 11:55:14.594209  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:14.594231  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:14.594306  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:14.617763  604010 cri.go:89] found id: ""
	I1213 11:55:14.617784  604010 logs.go:282] 0 containers: []
	W1213 11:55:14.617818  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:14.617824  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:14.617882  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:14.641477  604010 cri.go:89] found id: ""
	I1213 11:55:14.641499  604010 logs.go:282] 0 containers: []
	W1213 11:55:14.641508  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:14.641514  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:14.641580  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:14.706320  604010 cri.go:89] found id: ""
	I1213 11:55:14.706395  604010 logs.go:282] 0 containers: []
	W1213 11:55:14.706419  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:14.706438  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:14.706530  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:14.750579  604010 cri.go:89] found id: ""
	I1213 11:55:14.750602  604010 logs.go:282] 0 containers: []
	W1213 11:55:14.750611  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:14.750617  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:14.750738  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:14.777264  604010 cri.go:89] found id: ""
	I1213 11:55:14.777299  604010 logs.go:282] 0 containers: []
	W1213 11:55:14.777308  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:14.777321  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:14.777392  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:14.801675  604010 cri.go:89] found id: ""
	I1213 11:55:14.801750  604010 logs.go:282] 0 containers: []
	W1213 11:55:14.801775  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:14.801794  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:14.801878  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:14.826273  604010 cri.go:89] found id: ""
	I1213 11:55:14.826308  604010 logs.go:282] 0 containers: []
	W1213 11:55:14.826317  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:14.826327  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:14.826341  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:14.852456  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:14.852492  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:14.880309  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:14.880337  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:14.935692  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:14.935727  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:14.952137  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:14.952167  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:15.033989  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:15.011900    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:15.014560    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:15.015092    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:15.017168    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:15.018209    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:15.011900    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:15.014560    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:15.015092    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:15.017168    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:15.018209    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
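Note: the timestamps (11:55:08, :11, :14, :17, ...) show the whole probe-and-gather cycle repeating at roughly three-second intervals while minikube waits for the apiserver to appear. A hypothetical loop approximating that wait; the interval is inferred from the log timestamps, not taken from minikube source:

    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3    # approximate cadence observed in the log
    done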
	I1213 11:55:17.535599  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:17.547401  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:17.547477  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:17.573160  604010 cri.go:89] found id: ""
	I1213 11:55:17.573190  604010 logs.go:282] 0 containers: []
	W1213 11:55:17.573199  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:17.573206  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:17.573269  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:17.602638  604010 cri.go:89] found id: ""
	I1213 11:55:17.602664  604010 logs.go:282] 0 containers: []
	W1213 11:55:17.602673  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:17.602679  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:17.602761  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:17.628217  604010 cri.go:89] found id: ""
	I1213 11:55:17.628242  604010 logs.go:282] 0 containers: []
	W1213 11:55:17.628251  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:17.628258  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:17.628321  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:17.653857  604010 cri.go:89] found id: ""
	I1213 11:55:17.653923  604010 logs.go:282] 0 containers: []
	W1213 11:55:17.653934  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:17.653941  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:17.654004  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:17.730131  604010 cri.go:89] found id: ""
	I1213 11:55:17.730166  604010 logs.go:282] 0 containers: []
	W1213 11:55:17.730175  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:17.730211  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:17.730290  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:17.764018  604010 cri.go:89] found id: ""
	I1213 11:55:17.764045  604010 logs.go:282] 0 containers: []
	W1213 11:55:17.764053  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:17.764060  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:17.764139  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:17.789006  604010 cri.go:89] found id: ""
	I1213 11:55:17.789029  604010 logs.go:282] 0 containers: []
	W1213 11:55:17.789039  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:17.789045  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:17.789110  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:17.820038  604010 cri.go:89] found id: ""
	I1213 11:55:17.820061  604010 logs.go:282] 0 containers: []
	W1213 11:55:17.820070  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:17.820080  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:17.820091  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:17.845672  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:17.845708  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:17.876520  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:17.876549  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:17.934113  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:17.934148  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:17.950852  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:17.950884  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:18.024225  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:18.014810    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:18.015320    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:18.017184    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:18.017872    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:18.019543    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:18.014810    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:18.015320    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:18.017184    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:18.017872    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:18.019543    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:55:20.526091  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:20.539006  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:20.539072  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:20.568228  604010 cri.go:89] found id: ""
	I1213 11:55:20.568252  604010 logs.go:282] 0 containers: []
	W1213 11:55:20.568260  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:20.568266  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:20.568341  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:20.595603  604010 cri.go:89] found id: ""
	I1213 11:55:20.595632  604010 logs.go:282] 0 containers: []
	W1213 11:55:20.595642  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:20.595648  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:20.595710  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:20.619697  604010 cri.go:89] found id: ""
	I1213 11:55:20.619723  604010 logs.go:282] 0 containers: []
	W1213 11:55:20.619732  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:20.619739  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:20.619801  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:20.644480  604010 cri.go:89] found id: ""
	I1213 11:55:20.644507  604010 logs.go:282] 0 containers: []
	W1213 11:55:20.644516  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:20.644523  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:20.644605  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:20.707263  604010 cri.go:89] found id: ""
	I1213 11:55:20.707286  604010 logs.go:282] 0 containers: []
	W1213 11:55:20.707295  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:20.707301  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:20.707362  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:20.753734  604010 cri.go:89] found id: ""
	I1213 11:55:20.753758  604010 logs.go:282] 0 containers: []
	W1213 11:55:20.753767  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:20.753773  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:20.753832  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:20.779244  604010 cri.go:89] found id: ""
	I1213 11:55:20.779267  604010 logs.go:282] 0 containers: []
	W1213 11:55:20.779275  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:20.779282  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:20.779342  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:20.808050  604010 cri.go:89] found id: ""
	I1213 11:55:20.808127  604010 logs.go:282] 0 containers: []
	W1213 11:55:20.808144  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:20.808155  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:20.808167  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:20.863714  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:20.863751  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:20.879958  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:20.879988  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:20.947629  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:20.938365    7395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:20.939048    7395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:20.940693    7395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:20.941317    7395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:20.943088    7395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:20.938365    7395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:20.939048    7395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:20.940693    7395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:20.941317    7395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:20.943088    7395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:55:20.947653  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:20.947668  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:20.972884  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:20.972921  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:23.506189  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:23.517150  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:23.517220  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:23.544888  604010 cri.go:89] found id: ""
	I1213 11:55:23.544912  604010 logs.go:282] 0 containers: []
	W1213 11:55:23.544920  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:23.544927  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:23.544992  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:23.571162  604010 cri.go:89] found id: ""
	I1213 11:55:23.571189  604010 logs.go:282] 0 containers: []
	W1213 11:55:23.571197  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:23.571204  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:23.571288  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:23.596593  604010 cri.go:89] found id: ""
	I1213 11:55:23.596618  604010 logs.go:282] 0 containers: []
	W1213 11:55:23.596626  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:23.596633  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:23.596693  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:23.622396  604010 cri.go:89] found id: ""
	I1213 11:55:23.622424  604010 logs.go:282] 0 containers: []
	W1213 11:55:23.622433  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:23.622439  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:23.622541  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:23.648441  604010 cri.go:89] found id: ""
	I1213 11:55:23.648468  604010 logs.go:282] 0 containers: []
	W1213 11:55:23.648478  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:23.648484  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:23.648552  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:23.698559  604010 cri.go:89] found id: ""
	I1213 11:55:23.698586  604010 logs.go:282] 0 containers: []
	W1213 11:55:23.698595  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:23.698601  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:23.698664  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:23.749855  604010 cri.go:89] found id: ""
	I1213 11:55:23.749883  604010 logs.go:282] 0 containers: []
	W1213 11:55:23.749893  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:23.749905  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:23.749964  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:23.781499  604010 cri.go:89] found id: ""
	I1213 11:55:23.781527  604010 logs.go:282] 0 containers: []
	W1213 11:55:23.781536  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:23.781547  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:23.781571  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:23.815145  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:23.815174  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:23.871093  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:23.871128  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:23.887427  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:23.887455  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:23.956327  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:23.948085    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:23.948683    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:23.950286    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:23.950824    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:23.952300    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:23.948085    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:23.948683    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:23.950286    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:23.950824    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:23.952300    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:55:23.956396  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:23.956417  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:26.482024  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:26.492511  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:26.492582  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:26.517699  604010 cri.go:89] found id: ""
	I1213 11:55:26.517777  604010 logs.go:282] 0 containers: []
	W1213 11:55:26.517800  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:26.517818  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:26.517906  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:26.545138  604010 cri.go:89] found id: ""
	I1213 11:55:26.545207  604010 logs.go:282] 0 containers: []
	W1213 11:55:26.545233  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:26.545251  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:26.545341  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:26.570019  604010 cri.go:89] found id: ""
	I1213 11:55:26.570090  604010 logs.go:282] 0 containers: []
	W1213 11:55:26.570116  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:26.570134  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:26.570226  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:26.596752  604010 cri.go:89] found id: ""
	I1213 11:55:26.596831  604010 logs.go:282] 0 containers: []
	W1213 11:55:26.596854  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:26.596869  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:26.596946  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:26.625280  604010 cri.go:89] found id: ""
	I1213 11:55:26.625306  604010 logs.go:282] 0 containers: []
	W1213 11:55:26.625315  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:26.625322  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:26.625379  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:26.655489  604010 cri.go:89] found id: ""
	I1213 11:55:26.655513  604010 logs.go:282] 0 containers: []
	W1213 11:55:26.655522  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:26.655528  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:26.655594  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:26.688001  604010 cri.go:89] found id: ""
	I1213 11:55:26.688028  604010 logs.go:282] 0 containers: []
	W1213 11:55:26.688037  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:26.688043  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:26.688103  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:26.720200  604010 cri.go:89] found id: ""
	I1213 11:55:26.720226  604010 logs.go:282] 0 containers: []
	W1213 11:55:26.720235  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:26.720244  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:26.720255  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:26.751334  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:26.751368  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:26.791793  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:26.791819  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:26.847456  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:26.847493  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:26.864079  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:26.864109  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:26.927248  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:26.919337    7630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:26.920135    7630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:26.921687    7630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:26.921990    7630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:26.923429    7630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:26.919337    7630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:26.920135    7630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:26.921687    7630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:26.921990    7630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:26.923429    7630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:55:29.427521  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:29.438225  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:29.438297  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:29.463111  604010 cri.go:89] found id: ""
	I1213 11:55:29.463137  604010 logs.go:282] 0 containers: []
	W1213 11:55:29.463146  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:29.463154  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:29.463222  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:29.488474  604010 cri.go:89] found id: ""
	I1213 11:55:29.488504  604010 logs.go:282] 0 containers: []
	W1213 11:55:29.488513  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:29.488519  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:29.488580  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:29.514792  604010 cri.go:89] found id: ""
	I1213 11:55:29.514815  604010 logs.go:282] 0 containers: []
	W1213 11:55:29.514824  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:29.514830  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:29.514890  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:29.540502  604010 cri.go:89] found id: ""
	I1213 11:55:29.540528  604010 logs.go:282] 0 containers: []
	W1213 11:55:29.540537  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:29.540544  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:29.540623  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:29.569010  604010 cri.go:89] found id: ""
	I1213 11:55:29.569035  604010 logs.go:282] 0 containers: []
	W1213 11:55:29.569044  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:29.569050  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:29.569143  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:29.597354  604010 cri.go:89] found id: ""
	I1213 11:55:29.597381  604010 logs.go:282] 0 containers: []
	W1213 11:55:29.597390  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:29.597396  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:29.597482  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:29.622205  604010 cri.go:89] found id: ""
	I1213 11:55:29.622230  604010 logs.go:282] 0 containers: []
	W1213 11:55:29.622239  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:29.622245  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:29.622321  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:29.649830  604010 cri.go:89] found id: ""
	I1213 11:55:29.649856  604010 logs.go:282] 0 containers: []
	W1213 11:55:29.649865  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:29.649874  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:29.649914  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:29.717017  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:29.717058  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:29.745372  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:29.745398  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:29.821563  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:29.813336    7729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:29.813962    7729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:29.815565    7729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:29.815994    7729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:29.817582    7729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:29.813336    7729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:29.813962    7729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:29.815565    7729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:29.815994    7729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:29.817582    7729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:55:29.821589  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:29.821603  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:29.847167  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:29.847206  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
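
Each cycle above starts with the same probe: `sudo crictl ps -a --quiet --name=<component>` for every expected control-plane container, with an empty result logged as `No container was found matching "..."`. A minimal Go sketch of that probe, assuming crictl is runnable via sudo on the host; listContainerIDs is an invented name for illustration, not minikube's cri.go API:

    // Sketch only: mirrors the per-component crictl probe in the log.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs runs `crictl ps -a --quiet --name=<name>` and returns
    // one container ID per output line; an empty slice means no match.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := listContainerIDs(name)
            switch {
            case err != nil:
                fmt.Printf("listing %q failed: %v\n", name, err)
            case len(ids) == 0:
                fmt.Printf("no container found matching %q\n", name)
            default:
                fmt.Printf("found %d container(s) for %q: %v\n", len(ids), name, ids)
            }
        }
    }

Because the apiserver never comes up, every probe in this section keeps returning an empty ID list, which is why the same cycle repeats below.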
	I1213 11:55:32.379999  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:32.394044  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:32.394117  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:32.419725  604010 cri.go:89] found id: ""
	I1213 11:55:32.419751  604010 logs.go:282] 0 containers: []
	W1213 11:55:32.419759  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:32.419767  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:32.419827  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:32.448514  604010 cri.go:89] found id: ""
	I1213 11:55:32.448537  604010 logs.go:282] 0 containers: []
	W1213 11:55:32.448546  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:32.448552  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:32.448614  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:32.474220  604010 cri.go:89] found id: ""
	I1213 11:55:32.474257  604010 logs.go:282] 0 containers: []
	W1213 11:55:32.474266  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:32.474272  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:32.474331  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:32.501945  604010 cri.go:89] found id: ""
	I1213 11:55:32.501970  604010 logs.go:282] 0 containers: []
	W1213 11:55:32.501980  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:32.501987  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:32.502051  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:32.529117  604010 cri.go:89] found id: ""
	I1213 11:55:32.529143  604010 logs.go:282] 0 containers: []
	W1213 11:55:32.529151  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:32.529159  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:32.529220  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:32.558516  604010 cri.go:89] found id: ""
	I1213 11:55:32.558545  604010 logs.go:282] 0 containers: []
	W1213 11:55:32.558554  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:32.558563  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:32.558624  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:32.584351  604010 cri.go:89] found id: ""
	I1213 11:55:32.584375  604010 logs.go:282] 0 containers: []
	W1213 11:55:32.584383  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:32.584390  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:32.584459  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:32.610180  604010 cri.go:89] found id: ""
	I1213 11:55:32.610203  604010 logs.go:282] 0 containers: []
	W1213 11:55:32.610212  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:32.610222  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:32.610233  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:32.668609  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:32.668647  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:32.687093  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:32.687199  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:32.806632  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:32.798550    7842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:32.799065    7842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:32.800667    7842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:32.801088    7842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:32.802806    7842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:32.798550    7842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:32.799065    7842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:32.800667    7842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:32.801088    7842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:32.802806    7842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:55:32.806658  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:32.806670  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:32.832549  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:32.832585  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
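
The "failed describe nodes" blocks report the command's stdout and stderr separately: stdout is empty and stderr carries the refused-connection errors. A hedged sketch of such a capture in Go; the kubectl command line is copied verbatim from the log, but the wrapper program is illustrative, not minikube's logs.go:

    // Sketch only: run the describe-nodes command and keep the two streams
    // apart, the way the failure blocks above are formatted.
    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("/bin/bash", "-c",
            "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
        var stdout, stderr bytes.Buffer
        cmd.Stdout, cmd.Stderr = &stdout, &stderr
        if err := cmd.Run(); err != nil {
            // With the apiserver down this prints "exit status 1" plus the
            // same refused-connection stderr seen in the report.
            fmt.Printf("failed describe nodes: %v\nstdout:\n%s\nstderr:\n%s",
                err, stdout.String(), stderr.String())
            return
        }
        fmt.Print(stdout.String())
    }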
	I1213 11:55:35.361963  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:35.372809  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:35.372881  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:35.398138  604010 cri.go:89] found id: ""
	I1213 11:55:35.398164  604010 logs.go:282] 0 containers: []
	W1213 11:55:35.398172  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:35.398178  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:35.398238  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:35.423828  604010 cri.go:89] found id: ""
	I1213 11:55:35.423854  604010 logs.go:282] 0 containers: []
	W1213 11:55:35.423863  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:35.423870  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:35.423934  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:35.453483  604010 cri.go:89] found id: ""
	I1213 11:55:35.453508  604010 logs.go:282] 0 containers: []
	W1213 11:55:35.453518  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:35.453524  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:35.453617  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:35.478270  604010 cri.go:89] found id: ""
	I1213 11:55:35.478294  604010 logs.go:282] 0 containers: []
	W1213 11:55:35.478303  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:35.478310  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:35.478373  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:35.508196  604010 cri.go:89] found id: ""
	I1213 11:55:35.508226  604010 logs.go:282] 0 containers: []
	W1213 11:55:35.508235  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:35.508242  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:35.508327  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:35.537327  604010 cri.go:89] found id: ""
	I1213 11:55:35.537359  604010 logs.go:282] 0 containers: []
	W1213 11:55:35.537369  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:35.537401  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:35.537490  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:35.564387  604010 cri.go:89] found id: ""
	I1213 11:55:35.564412  604010 logs.go:282] 0 containers: []
	W1213 11:55:35.564420  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:35.564427  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:35.564483  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:35.589741  604010 cri.go:89] found id: ""
	I1213 11:55:35.589766  604010 logs.go:282] 0 containers: []
	W1213 11:55:35.589776  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:35.589787  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:35.589798  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:35.645240  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:35.645275  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:35.672440  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:35.672532  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:35.779839  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:35.770429    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:35.771175    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:35.772996    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:35.773416    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:35.775177    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:35.770429    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:35.771175    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:35.772996    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:35.773416    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:35.775177    7955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:55:35.779861  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:35.779874  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:35.804945  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:35.804983  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
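
Every stderr block above reduces to one condition: nothing is accepting TCP connections on localhost:8443, so each of kubectl's API probes fails with "connect: connection refused". That condition can be confirmed directly with a minimal self-contained check:

    // Minimal check: is anything listening where kubectl expects the
    // apiserver? A refused dial matches the stderr lines above.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver unreachable:", err) // expected while it is down
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }

While no kube-apiserver container exists, this dial fails immediately, matching the five refused requests kubectl logs per attempt.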
	I1213 11:55:38.336379  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:38.347209  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:38.347278  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:38.372679  604010 cri.go:89] found id: ""
	I1213 11:55:38.372706  604010 logs.go:282] 0 containers: []
	W1213 11:55:38.372716  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:38.372723  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:38.372781  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:38.401308  604010 cri.go:89] found id: ""
	I1213 11:55:38.401340  604010 logs.go:282] 0 containers: []
	W1213 11:55:38.401354  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:38.401360  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:38.401428  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:38.425990  604010 cri.go:89] found id: ""
	I1213 11:55:38.426025  604010 logs.go:282] 0 containers: []
	W1213 11:55:38.426034  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:38.426040  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:38.426097  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:38.452858  604010 cri.go:89] found id: ""
	I1213 11:55:38.452884  604010 logs.go:282] 0 containers: []
	W1213 11:55:38.452892  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:38.452900  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:38.452958  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:38.477766  604010 cri.go:89] found id: ""
	I1213 11:55:38.477791  604010 logs.go:282] 0 containers: []
	W1213 11:55:38.477800  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:38.477807  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:38.477876  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:38.503003  604010 cri.go:89] found id: ""
	I1213 11:55:38.503028  604010 logs.go:282] 0 containers: []
	W1213 11:55:38.503037  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:38.503043  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:38.503110  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:38.532923  604010 cri.go:89] found id: ""
	I1213 11:55:38.532946  604010 logs.go:282] 0 containers: []
	W1213 11:55:38.532955  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:38.532962  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:38.533021  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:38.561367  604010 cri.go:89] found id: ""
	I1213 11:55:38.561389  604010 logs.go:282] 0 containers: []
	W1213 11:55:38.561397  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:38.561406  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:38.561425  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:38.627276  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:38.618551    8054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:38.619310    8054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:38.621183    8054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:38.621748    8054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:38.623328    8054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:38.618551    8054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:38.619310    8054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:38.621183    8054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:38.621748    8054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:38.623328    8054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:55:38.627341  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:38.627361  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:38.652980  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:38.653021  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:38.702202  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:38.702236  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:38.775658  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:38.775742  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:41.293324  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:41.304911  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:41.304988  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:41.329954  604010 cri.go:89] found id: ""
	I1213 11:55:41.329981  604010 logs.go:282] 0 containers: []
	W1213 11:55:41.329990  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:41.329997  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:41.330068  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:41.356810  604010 cri.go:89] found id: ""
	I1213 11:55:41.356835  604010 logs.go:282] 0 containers: []
	W1213 11:55:41.356845  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:41.356851  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:41.356911  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:41.382782  604010 cri.go:89] found id: ""
	I1213 11:55:41.382807  604010 logs.go:282] 0 containers: []
	W1213 11:55:41.382816  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:41.382823  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:41.382882  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:41.411145  604010 cri.go:89] found id: ""
	I1213 11:55:41.411170  604010 logs.go:282] 0 containers: []
	W1213 11:55:41.411179  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:41.411186  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:41.411242  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:41.439686  604010 cri.go:89] found id: ""
	I1213 11:55:41.439713  604010 logs.go:282] 0 containers: []
	W1213 11:55:41.439722  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:41.439729  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:41.439797  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:41.463861  604010 cri.go:89] found id: ""
	I1213 11:55:41.463884  604010 logs.go:282] 0 containers: []
	W1213 11:55:41.463893  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:41.463900  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:41.463958  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:41.488219  604010 cri.go:89] found id: ""
	I1213 11:55:41.488243  604010 logs.go:282] 0 containers: []
	W1213 11:55:41.488252  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:41.488258  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:41.488339  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:41.513569  604010 cri.go:89] found id: ""
	I1213 11:55:41.513600  604010 logs.go:282] 0 containers: []
	W1213 11:55:41.513609  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:41.513619  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:41.513656  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:41.570549  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:41.570585  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:41.587559  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:41.587588  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:41.654460  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:41.646598    8171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:41.647136    8171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:41.648610    8171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:41.649143    8171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:41.650745    8171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:41.646598    8171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:41.647136    8171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:41.648610    8171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:41.649143    8171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:41.650745    8171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:55:41.654481  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:41.654494  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:41.679884  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:41.679918  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
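
The timestamps show the whole gather-and-retry cycle repeating roughly every three seconds, gated on `sudo pgrep -xnf kube-apiserver.*minikube.*` succeeding. A sketch of that wait loop under stated assumptions: it runs the check locally rather than over minikube's SSH runner, and waitForAPIServerProcess is an invented name:

    // Sketch only: the ~3s polling cadence visible in the log above.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func waitForAPIServerProcess(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 only when a matching process exists.
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            time.Sleep(3 * time.Second) // gather diagnostics here, then retry
        }
        return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
        if err := waitForAPIServerProcess(30 * time.Second); err != nil {
            fmt.Println(err)
        }
    }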
	I1213 11:55:44.238824  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:44.249658  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:44.249735  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:44.274262  604010 cri.go:89] found id: ""
	I1213 11:55:44.274287  604010 logs.go:282] 0 containers: []
	W1213 11:55:44.274297  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:44.274303  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:44.274365  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:44.298725  604010 cri.go:89] found id: ""
	I1213 11:55:44.298750  604010 logs.go:282] 0 containers: []
	W1213 11:55:44.298759  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:44.298765  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:44.298831  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:44.332989  604010 cri.go:89] found id: ""
	I1213 11:55:44.333019  604010 logs.go:282] 0 containers: []
	W1213 11:55:44.333028  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:44.333035  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:44.333095  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:44.358205  604010 cri.go:89] found id: ""
	I1213 11:55:44.358229  604010 logs.go:282] 0 containers: []
	W1213 11:55:44.358238  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:44.358250  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:44.358313  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:44.383989  604010 cri.go:89] found id: ""
	I1213 11:55:44.384017  604010 logs.go:282] 0 containers: []
	W1213 11:55:44.384027  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:44.384034  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:44.384099  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:44.409651  604010 cri.go:89] found id: ""
	I1213 11:55:44.409677  604010 logs.go:282] 0 containers: []
	W1213 11:55:44.409686  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:44.409692  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:44.409751  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:44.435253  604010 cri.go:89] found id: ""
	I1213 11:55:44.435280  604010 logs.go:282] 0 containers: []
	W1213 11:55:44.435288  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:44.435295  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:44.435354  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:44.459342  604010 cri.go:89] found id: ""
	I1213 11:55:44.459379  604010 logs.go:282] 0 containers: []
	W1213 11:55:44.459388  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:44.459398  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:44.459409  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:44.527760  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:44.518804    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:44.519537    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:44.521331    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:44.521838    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:44.523375    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:44.518804    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:44.519537    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:44.521331    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:44.521838    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:44.523375    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:55:44.527781  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:44.527793  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:44.554052  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:44.554086  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:44.583553  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:44.583582  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:44.639690  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:44.639723  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:47.156860  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:47.167658  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:47.167728  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:47.191689  604010 cri.go:89] found id: ""
	I1213 11:55:47.191714  604010 logs.go:282] 0 containers: []
	W1213 11:55:47.191723  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:47.191730  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:47.191790  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:47.217625  604010 cri.go:89] found id: ""
	I1213 11:55:47.217652  604010 logs.go:282] 0 containers: []
	W1213 11:55:47.217665  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:47.217679  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:47.217756  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:47.246057  604010 cri.go:89] found id: ""
	I1213 11:55:47.246080  604010 logs.go:282] 0 containers: []
	W1213 11:55:47.246088  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:47.246094  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:47.246153  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:47.272649  604010 cri.go:89] found id: ""
	I1213 11:55:47.272673  604010 logs.go:282] 0 containers: []
	W1213 11:55:47.272682  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:47.272688  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:47.272747  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:47.297156  604010 cri.go:89] found id: ""
	I1213 11:55:47.297178  604010 logs.go:282] 0 containers: []
	W1213 11:55:47.297186  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:47.297192  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:47.297249  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:47.321533  604010 cri.go:89] found id: ""
	I1213 11:55:47.321555  604010 logs.go:282] 0 containers: []
	W1213 11:55:47.321563  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:47.321570  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:47.321647  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:47.347526  604010 cri.go:89] found id: ""
	I1213 11:55:47.347548  604010 logs.go:282] 0 containers: []
	W1213 11:55:47.347558  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:47.347566  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:47.347743  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:47.373360  604010 cri.go:89] found id: ""
	I1213 11:55:47.373437  604010 logs.go:282] 0 containers: []
	W1213 11:55:47.373466  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:47.373491  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:47.373544  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:47.406388  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:47.406463  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:47.467132  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:47.467169  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:47.482951  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:47.482977  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:47.547530  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:47.538747    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:47.539246    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:47.540864    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:47.541466    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:47.543147    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:47.538747    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:47.539246    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:47.540864    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:47.541466    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:47.543147    8411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:55:47.547599  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:47.547625  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
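
The "container status" step uses a shell fallback, `sudo \`which crictl || echo crictl\` ps -a || sudo docker ps -a`, so a container listing is still collected on hosts where crictl is missing or failing. The same first-success-wins pattern expressed in Go; runFirstAvailable is a hypothetical helper, not minikube code:

    // Sketch only: try each command in order and return the first success.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func runFirstAvailable(cmds [][]string) (string, error) {
        var lastErr error
        for _, argv := range cmds {
            out, err := exec.Command(argv[0], argv[1:]...).CombinedOutput()
            if err == nil {
                return string(out), nil
            }
            lastErr = err // remember the failure and try the next command
        }
        return "", fmt.Errorf("all commands failed, last: %w", lastErr)
    }

    func main() {
        out, err := runFirstAvailable([][]string{
            {"sudo", "crictl", "ps", "-a"},
            {"sudo", "docker", "ps", "-a"},
        })
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Print(out)
    }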
	I1213 11:55:50.076734  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:50.088146  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:50.088221  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:50.114846  604010 cri.go:89] found id: ""
	I1213 11:55:50.114871  604010 logs.go:282] 0 containers: []
	W1213 11:55:50.114879  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:50.114885  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:50.114952  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:50.140346  604010 cri.go:89] found id: ""
	I1213 11:55:50.140383  604010 logs.go:282] 0 containers: []
	W1213 11:55:50.140393  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:50.140400  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:50.140461  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:50.165612  604010 cri.go:89] found id: ""
	I1213 11:55:50.165647  604010 logs.go:282] 0 containers: []
	W1213 11:55:50.165656  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:50.165663  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:50.165735  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:50.193167  604010 cri.go:89] found id: ""
	I1213 11:55:50.193196  604010 logs.go:282] 0 containers: []
	W1213 11:55:50.193205  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:50.193211  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:50.193288  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:50.217552  604010 cri.go:89] found id: ""
	I1213 11:55:50.217602  604010 logs.go:282] 0 containers: []
	W1213 11:55:50.217622  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:50.217630  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:50.217703  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:50.243207  604010 cri.go:89] found id: ""
	I1213 11:55:50.243230  604010 logs.go:282] 0 containers: []
	W1213 11:55:50.243240  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:50.243246  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:50.243306  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:50.267889  604010 cri.go:89] found id: ""
	I1213 11:55:50.267961  604010 logs.go:282] 0 containers: []
	W1213 11:55:50.267980  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:50.267988  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:50.268050  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:50.293393  604010 cri.go:89] found id: ""
	I1213 11:55:50.293420  604010 logs.go:282] 0 containers: []
	W1213 11:55:50.293429  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:50.293448  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:50.293461  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:50.358945  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:50.350414    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:50.351257    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:50.352886    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:50.353223    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:50.354777    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:50.350414    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:50.351257    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:50.352886    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:50.353223    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:50.354777    8508 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:55:50.358967  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:50.358982  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:50.384886  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:50.384922  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:50.416671  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:50.416697  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:50.472398  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:50.472437  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:52.988724  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:53.000673  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:53.000825  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:53.028787  604010 cri.go:89] found id: ""
	I1213 11:55:53.028812  604010 logs.go:282] 0 containers: []
	W1213 11:55:53.028822  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:53.028829  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:53.028960  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:53.059024  604010 cri.go:89] found id: ""
	I1213 11:55:53.059060  604010 logs.go:282] 0 containers: []
	W1213 11:55:53.059069  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:53.059076  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:53.059137  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:53.084415  604010 cri.go:89] found id: ""
	I1213 11:55:53.084443  604010 logs.go:282] 0 containers: []
	W1213 11:55:53.084452  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:53.084459  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:53.084519  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:53.111367  604010 cri.go:89] found id: ""
	I1213 11:55:53.111402  604010 logs.go:282] 0 containers: []
	W1213 11:55:53.111413  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:53.111420  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:53.111485  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:53.138948  604010 cri.go:89] found id: ""
	I1213 11:55:53.138973  604010 logs.go:282] 0 containers: []
	W1213 11:55:53.138992  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:53.138999  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:53.139058  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:53.164317  604010 cri.go:89] found id: ""
	I1213 11:55:53.164341  604010 logs.go:282] 0 containers: []
	W1213 11:55:53.164350  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:53.164363  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:53.164420  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:53.189237  604010 cri.go:89] found id: ""
	I1213 11:55:53.189263  604010 logs.go:282] 0 containers: []
	W1213 11:55:53.189284  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:53.189291  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:53.189365  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:53.213792  604010 cri.go:89] found id: ""
	I1213 11:55:53.213831  604010 logs.go:282] 0 containers: []
	W1213 11:55:53.213840  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:53.213849  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:53.213864  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:53.268812  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:53.268852  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:53.284561  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:53.284592  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:53.350505  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:53.342240    8626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:53.342928    8626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:53.344529    8626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:53.345039    8626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:53.346717    8626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:53.342240    8626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:53.342928    8626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:53.344529    8626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:53.345039    8626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:53.346717    8626 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
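The describe-nodes failure above is a symptom rather than the root cause: kubectl reads /var/lib/minikube/kubeconfig, whose server is localhost:8443, and the dial is refused because no kube-apiserver process or container exists (the pgrep and crictl probes earlier in this pass both came back empty). A quick manual confirmation uses the same probes the log runs; the curl line is an added illustration and assumes curl is present in the image:

    # both return nothing while the apiserver is down (verbatim from the log)
    sudo pgrep -xnf kube-apiserver.*minikube.*
    sudo crictl ps -a --quiet --name=kube-apiserver

    # assumed extra check: refused until the apiserver binds :8443
    curl -k https://localhost:8443/healthz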
	I1213 11:55:53.350528  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:53.350540  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:53.375550  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:53.375586  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:55.903770  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:55.916528  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:55.916606  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:55.974216  604010 cri.go:89] found id: ""
	I1213 11:55:55.974238  604010 logs.go:282] 0 containers: []
	W1213 11:55:55.974246  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:55.974254  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:55.974316  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:56.009212  604010 cri.go:89] found id: ""
	I1213 11:55:56.009235  604010 logs.go:282] 0 containers: []
	W1213 11:55:56.009243  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:56.009250  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:56.009308  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:56.036696  604010 cri.go:89] found id: ""
	I1213 11:55:56.036722  604010 logs.go:282] 0 containers: []
	W1213 11:55:56.036731  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:56.036738  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:56.036821  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:56.062550  604010 cri.go:89] found id: ""
	I1213 11:55:56.062577  604010 logs.go:282] 0 containers: []
	W1213 11:55:56.062586  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:56.062592  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:56.062649  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:56.087384  604010 cri.go:89] found id: ""
	I1213 11:55:56.087410  604010 logs.go:282] 0 containers: []
	W1213 11:55:56.087419  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:56.087425  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:56.087506  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:56.113129  604010 cri.go:89] found id: ""
	I1213 11:55:56.113153  604010 logs.go:282] 0 containers: []
	W1213 11:55:56.113164  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:56.113171  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:56.113234  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:56.137999  604010 cri.go:89] found id: ""
	I1213 11:55:56.138021  604010 logs.go:282] 0 containers: []
	W1213 11:55:56.138030  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:56.138036  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:56.138094  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:56.164815  604010 cri.go:89] found id: ""
	I1213 11:55:56.164841  604010 logs.go:282] 0 containers: []
	W1213 11:55:56.164851  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:56.164861  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:56.164872  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:56.190007  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:56.190042  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:56.222068  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:56.222097  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:56.277067  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:56.277104  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:56.293465  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:56.293495  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:56.360755  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:56.351282    8753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:56.352626    8753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:56.353483    8753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:56.354403    8753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:56.356173    8753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:56.351282    8753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:56.352626    8753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:56.353483    8753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:56.354403    8753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:56.356173    8753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
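Each retry pass enumerates the same eight control-plane and addon names through crictl before falling back to log gathering. A minimal shell sketch of that per-component check follows; it is illustrative only (minikube issues these commands from Go via ssh_runner, not as a shell loop), with the names and crictl invocation taken directly from this log:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      [ -z "$ids" ] && echo "No container was found matching \"$c\""
    done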
	I1213 11:55:58.861486  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:55:58.872284  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:55:58.872365  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:55:58.898051  604010 cri.go:89] found id: ""
	I1213 11:55:58.898077  604010 logs.go:282] 0 containers: []
	W1213 11:55:58.898086  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:55:58.898093  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:55:58.898152  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:55:58.937804  604010 cri.go:89] found id: ""
	I1213 11:55:58.937834  604010 logs.go:282] 0 containers: []
	W1213 11:55:58.937852  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:55:58.937865  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:55:58.937957  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:55:58.987256  604010 cri.go:89] found id: ""
	I1213 11:55:58.987290  604010 logs.go:282] 0 containers: []
	W1213 11:55:58.987301  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:55:58.987308  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:55:58.987378  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:55:59.018252  604010 cri.go:89] found id: ""
	I1213 11:55:59.018274  604010 logs.go:282] 0 containers: []
	W1213 11:55:59.018282  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:55:59.018289  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:55:59.018350  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:55:59.046993  604010 cri.go:89] found id: ""
	I1213 11:55:59.047018  604010 logs.go:282] 0 containers: []
	W1213 11:55:59.047027  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:55:59.047033  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:55:59.047089  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:55:59.072813  604010 cri.go:89] found id: ""
	I1213 11:55:59.072888  604010 logs.go:282] 0 containers: []
	W1213 11:55:59.072903  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:55:59.072913  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:55:59.072988  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:55:59.097766  604010 cri.go:89] found id: ""
	I1213 11:55:59.097792  604010 logs.go:282] 0 containers: []
	W1213 11:55:59.097801  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:55:59.097808  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:55:59.097868  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:55:59.125013  604010 cri.go:89] found id: ""
	I1213 11:55:59.125038  604010 logs.go:282] 0 containers: []
	W1213 11:55:59.125047  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:55:59.125056  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:55:59.125070  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:55:59.150130  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:55:59.150164  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:55:59.178033  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:55:59.178107  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:55:59.233761  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:55:59.233795  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:55:59.249736  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:55:59.249772  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:55:59.314577  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:55:59.305285    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:59.306134    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:59.307637    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:59.308126    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:59.310000    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:55:59.305285    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:59.306134    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:59.307637    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:59.308126    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:55:59.310000    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:56:01.814837  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:01.826268  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:01.826352  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:01.856935  604010 cri.go:89] found id: ""
	I1213 11:56:01.856960  604010 logs.go:282] 0 containers: []
	W1213 11:56:01.856969  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:01.856979  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:01.857039  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:01.884429  604010 cri.go:89] found id: ""
	I1213 11:56:01.884454  604010 logs.go:282] 0 containers: []
	W1213 11:56:01.884463  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:01.884470  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:01.884530  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:01.929790  604010 cri.go:89] found id: ""
	I1213 11:56:01.929812  604010 logs.go:282] 0 containers: []
	W1213 11:56:01.929821  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:01.929828  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:01.929890  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:01.997657  604010 cri.go:89] found id: ""
	I1213 11:56:01.997686  604010 logs.go:282] 0 containers: []
	W1213 11:56:01.997703  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:01.997713  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:01.997785  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:02.027667  604010 cri.go:89] found id: ""
	I1213 11:56:02.027692  604010 logs.go:282] 0 containers: []
	W1213 11:56:02.027701  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:02.027707  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:02.027770  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:02.052911  604010 cri.go:89] found id: ""
	I1213 11:56:02.052935  604010 logs.go:282] 0 containers: []
	W1213 11:56:02.052944  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:02.052950  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:02.053009  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:02.078744  604010 cri.go:89] found id: ""
	I1213 11:56:02.078813  604010 logs.go:282] 0 containers: []
	W1213 11:56:02.078839  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:02.078857  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:02.078946  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:02.104065  604010 cri.go:89] found id: ""
	I1213 11:56:02.104136  604010 logs.go:282] 0 containers: []
	W1213 11:56:02.104158  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:02.104181  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:02.104219  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:02.177602  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:02.166576    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:02.167162    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:02.170937    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:02.171543    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:02.173272    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:56:02.166576    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:02.167162    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:02.170937    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:02.171543    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:02.173272    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:56:02.177623  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:02.177635  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:02.203025  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:02.203064  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:02.232249  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:02.232275  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:02.288746  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:02.288781  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
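The timestamps on the pgrep probes (11:55:52, :55, :58, 11:56:01, :04, ...) show the wait loop retrying on a roughly three-second cadence, so this block repeats unchanged until minikube's apiserver wait gives up. To watch the same condition interactively, a hedged one-liner (watch is assumed to be installed; the pattern is the one from the log):

    watch -n 3 "sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo not-running"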
	I1213 11:56:04.806667  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:04.817452  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:04.817526  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:04.843671  604010 cri.go:89] found id: ""
	I1213 11:56:04.843696  604010 logs.go:282] 0 containers: []
	W1213 11:56:04.843705  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:04.843712  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:04.843770  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:04.869847  604010 cri.go:89] found id: ""
	I1213 11:56:04.869873  604010 logs.go:282] 0 containers: []
	W1213 11:56:04.869882  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:04.869889  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:04.869949  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:04.895727  604010 cri.go:89] found id: ""
	I1213 11:56:04.895750  604010 logs.go:282] 0 containers: []
	W1213 11:56:04.895759  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:04.895766  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:04.895874  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:04.958057  604010 cri.go:89] found id: ""
	I1213 11:56:04.958083  604010 logs.go:282] 0 containers: []
	W1213 11:56:04.958093  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:04.958102  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:04.958164  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:05.011151  604010 cri.go:89] found id: ""
	I1213 11:56:05.011180  604010 logs.go:282] 0 containers: []
	W1213 11:56:05.011191  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:05.011198  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:05.011301  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:05.042226  604010 cri.go:89] found id: ""
	I1213 11:56:05.042257  604010 logs.go:282] 0 containers: []
	W1213 11:56:05.042267  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:05.042274  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:05.042344  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:05.067033  604010 cri.go:89] found id: ""
	I1213 11:56:05.067057  604010 logs.go:282] 0 containers: []
	W1213 11:56:05.067066  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:05.067073  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:05.067137  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:05.092704  604010 cri.go:89] found id: ""
	I1213 11:56:05.092729  604010 logs.go:282] 0 containers: []
	W1213 11:56:05.092740  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:05.092751  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:05.092789  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:05.149091  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:05.149142  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:05.165497  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:05.165536  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:05.234289  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:05.225131    9076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:05.225892    9076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:05.227653    9076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:05.228318    9076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:05.230170    9076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:56:05.225131    9076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:05.225892    9076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:05.227653    9076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:05.228318    9076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:05.230170    9076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:56:05.234313  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:05.234326  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:05.259839  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:05.259877  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:07.795276  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:07.805797  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:07.805865  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:07.833431  604010 cri.go:89] found id: ""
	I1213 11:56:07.833458  604010 logs.go:282] 0 containers: []
	W1213 11:56:07.833467  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:07.833474  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:07.833533  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:07.859570  604010 cri.go:89] found id: ""
	I1213 11:56:07.859596  604010 logs.go:282] 0 containers: []
	W1213 11:56:07.859605  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:07.859612  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:07.859680  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:07.885597  604010 cri.go:89] found id: ""
	I1213 11:56:07.885621  604010 logs.go:282] 0 containers: []
	W1213 11:56:07.885630  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:07.885636  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:07.885693  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:07.932272  604010 cri.go:89] found id: ""
	I1213 11:56:07.932295  604010 logs.go:282] 0 containers: []
	W1213 11:56:07.932304  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:07.932311  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:07.932368  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:07.971123  604010 cri.go:89] found id: ""
	I1213 11:56:07.971146  604010 logs.go:282] 0 containers: []
	W1213 11:56:07.971156  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:07.971162  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:07.971223  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:08.020370  604010 cri.go:89] found id: ""
	I1213 11:56:08.020442  604010 logs.go:282] 0 containers: []
	W1213 11:56:08.020470  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:08.020488  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:08.020576  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:08.050772  604010 cri.go:89] found id: ""
	I1213 11:56:08.050843  604010 logs.go:282] 0 containers: []
	W1213 11:56:08.050870  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:08.050888  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:08.050977  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:08.076860  604010 cri.go:89] found id: ""
	I1213 11:56:08.076891  604010 logs.go:282] 0 containers: []
	W1213 11:56:08.076901  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:08.076911  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:08.076923  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:08.136737  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:08.136772  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:08.152700  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:08.152856  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:08.216955  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:08.208521    9190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:08.209263    9190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:08.210851    9190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:08.211330    9190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:08.212940    9190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:56:08.208521    9190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:08.209263    9190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:08.210851    9190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:08.211330    9190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:08.212940    9190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:56:08.217027  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:08.217055  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:08.242524  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:08.242562  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:10.774825  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:10.785504  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:10.785573  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:10.812402  604010 cri.go:89] found id: ""
	I1213 11:56:10.812424  604010 logs.go:282] 0 containers: []
	W1213 11:56:10.812433  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:10.812440  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:10.812495  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:10.837362  604010 cri.go:89] found id: ""
	I1213 11:56:10.837387  604010 logs.go:282] 0 containers: []
	W1213 11:56:10.837396  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:10.837402  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:10.837461  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:10.862348  604010 cri.go:89] found id: ""
	I1213 11:56:10.862374  604010 logs.go:282] 0 containers: []
	W1213 11:56:10.862382  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:10.862389  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:10.862447  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:10.886922  604010 cri.go:89] found id: ""
	I1213 11:56:10.886999  604010 logs.go:282] 0 containers: []
	W1213 11:56:10.887020  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:10.887038  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:10.887121  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:10.931278  604010 cri.go:89] found id: ""
	I1213 11:56:10.931347  604010 logs.go:282] 0 containers: []
	W1213 11:56:10.931369  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:10.931387  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:10.931475  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:10.974160  604010 cri.go:89] found id: ""
	I1213 11:56:10.974226  604010 logs.go:282] 0 containers: []
	W1213 11:56:10.974254  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:10.974272  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:10.974357  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:11.010218  604010 cri.go:89] found id: ""
	I1213 11:56:11.010290  604010 logs.go:282] 0 containers: []
	W1213 11:56:11.010313  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:11.010332  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:11.010424  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:11.039062  604010 cri.go:89] found id: ""
	I1213 11:56:11.039097  604010 logs.go:282] 0 containers: []
	W1213 11:56:11.039108  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:11.039118  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:11.039130  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:11.095996  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:11.096035  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:11.112552  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:11.112583  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:11.181416  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:11.172048    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:11.172697    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:11.174491    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:11.175376    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:11.177169    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:56:11.172048    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:11.172697    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:11.174491    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:11.175376    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:11.177169    9301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:56:11.181436  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:11.181451  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:11.206963  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:11.207000  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:13.739447  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:13.750286  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:13.750359  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:13.776350  604010 cri.go:89] found id: ""
	I1213 11:56:13.776379  604010 logs.go:282] 0 containers: []
	W1213 11:56:13.776388  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:13.776395  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:13.776460  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:13.800680  604010 cri.go:89] found id: ""
	I1213 11:56:13.800705  604010 logs.go:282] 0 containers: []
	W1213 11:56:13.800714  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:13.800721  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:13.800780  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:13.826000  604010 cri.go:89] found id: ""
	I1213 11:56:13.826038  604010 logs.go:282] 0 containers: []
	W1213 11:56:13.826050  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:13.826072  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:13.826155  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:13.850538  604010 cri.go:89] found id: ""
	I1213 11:56:13.850564  604010 logs.go:282] 0 containers: []
	W1213 11:56:13.850582  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:13.850611  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:13.850706  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:13.879462  604010 cri.go:89] found id: ""
	I1213 11:56:13.879488  604010 logs.go:282] 0 containers: []
	W1213 11:56:13.879496  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:13.879503  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:13.879559  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:13.904388  604010 cri.go:89] found id: ""
	I1213 11:56:13.904414  604010 logs.go:282] 0 containers: []
	W1213 11:56:13.904422  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:13.904432  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:13.904488  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:13.936193  604010 cri.go:89] found id: ""
	I1213 11:56:13.936221  604010 logs.go:282] 0 containers: []
	W1213 11:56:13.936229  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:13.936236  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:13.936304  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:13.979520  604010 cri.go:89] found id: ""
	I1213 11:56:13.979547  604010 logs.go:282] 0 containers: []
	W1213 11:56:13.979556  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:13.979566  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:13.979577  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:14.047872  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:14.047909  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:14.064531  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:14.064559  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:14.132145  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:14.123439    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:14.124184    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:14.125827    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:14.126337    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:14.128067    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:56:14.123439    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:14.124184    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:14.125827    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:14.126337    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:14.128067    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:56:14.132167  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:14.132180  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:14.158143  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:14.158181  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:16.686213  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:16.696766  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:16.696836  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:16.720811  604010 cri.go:89] found id: ""
	I1213 11:56:16.720840  604010 logs.go:282] 0 containers: []
	W1213 11:56:16.720849  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:16.720856  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:16.720916  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:16.746135  604010 cri.go:89] found id: ""
	I1213 11:56:16.746162  604010 logs.go:282] 0 containers: []
	W1213 11:56:16.746170  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:16.746177  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:16.746235  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:16.772135  604010 cri.go:89] found id: ""
	I1213 11:56:16.772162  604010 logs.go:282] 0 containers: []
	W1213 11:56:16.772171  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:16.772177  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:16.772263  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:16.801712  604010 cri.go:89] found id: ""
	I1213 11:56:16.801738  604010 logs.go:282] 0 containers: []
	W1213 11:56:16.801748  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:16.801754  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:16.801813  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:16.825625  604010 cri.go:89] found id: ""
	I1213 11:56:16.825649  604010 logs.go:282] 0 containers: []
	W1213 11:56:16.825658  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:16.825664  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:16.825723  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:16.850464  604010 cri.go:89] found id: ""
	I1213 11:56:16.850490  604010 logs.go:282] 0 containers: []
	W1213 11:56:16.850498  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:16.850505  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:16.850561  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:16.882804  604010 cri.go:89] found id: ""
	I1213 11:56:16.882826  604010 logs.go:282] 0 containers: []
	W1213 11:56:16.882835  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:16.882848  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:16.882906  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:16.908046  604010 cri.go:89] found id: ""
	I1213 11:56:16.908071  604010 logs.go:282] 0 containers: []
	W1213 11:56:16.908080  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:16.908090  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:16.908104  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:17.008503  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:17.008590  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:17.024851  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:17.024884  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:17.092834  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	 output: 
	** stderr ** 
	E1213 11:56:17.083994    9524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:17.084849    9524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:17.086559    9524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:17.087267    9524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:17.088871    9524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:56:17.092854  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:17.092867  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:17.118299  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:17.118334  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:19.647201  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:19.658196  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:19.658313  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:19.681845  604010 cri.go:89] found id: ""
	I1213 11:56:19.681924  604010 logs.go:282] 0 containers: []
	W1213 11:56:19.681947  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:19.681966  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:19.682053  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:19.707693  604010 cri.go:89] found id: ""
	I1213 11:56:19.707717  604010 logs.go:282] 0 containers: []
	W1213 11:56:19.707727  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:19.707733  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:19.707809  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:19.732762  604010 cri.go:89] found id: ""
	I1213 11:56:19.732788  604010 logs.go:282] 0 containers: []
	W1213 11:56:19.732797  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:19.732804  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:19.732884  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:19.757359  604010 cri.go:89] found id: ""
	I1213 11:56:19.757393  604010 logs.go:282] 0 containers: []
	W1213 11:56:19.757402  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:19.757423  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:19.757500  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:19.785446  604010 cri.go:89] found id: ""
	I1213 11:56:19.785473  604010 logs.go:282] 0 containers: []
	W1213 11:56:19.785482  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:19.785489  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:19.785610  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:19.812583  604010 cri.go:89] found id: ""
	I1213 11:56:19.812607  604010 logs.go:282] 0 containers: []
	W1213 11:56:19.812616  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:19.812623  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:19.812681  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:19.836875  604010 cri.go:89] found id: ""
	I1213 11:56:19.836901  604010 logs.go:282] 0 containers: []
	W1213 11:56:19.836910  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:19.836919  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:19.837022  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:19.861557  604010 cri.go:89] found id: ""
	I1213 11:56:19.861584  604010 logs.go:282] 0 containers: []
	W1213 11:56:19.861595  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:19.861610  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:19.861631  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:19.920472  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:19.920510  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:19.973429  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:19.973459  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:20.062908  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	 output: 
	** stderr ** 
	E1213 11:56:20.053401    9639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:20.054064    9639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:20.055967    9639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:20.056677    9639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:20.058665    9639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:56:20.062932  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:20.062945  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:20.089847  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:20.089889  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:22.621952  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:22.633355  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:22.633434  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:22.661131  604010 cri.go:89] found id: ""
	I1213 11:56:22.661156  604010 logs.go:282] 0 containers: []
	W1213 11:56:22.661165  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:22.661172  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:22.661231  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:22.687274  604010 cri.go:89] found id: ""
	I1213 11:56:22.687309  604010 logs.go:282] 0 containers: []
	W1213 11:56:22.687319  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:22.687325  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:22.687385  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:22.712134  604010 cri.go:89] found id: ""
	I1213 11:56:22.712162  604010 logs.go:282] 0 containers: []
	W1213 11:56:22.712177  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:22.712184  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:22.712243  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:22.737658  604010 cri.go:89] found id: ""
	I1213 11:56:22.737684  604010 logs.go:282] 0 containers: []
	W1213 11:56:22.737693  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:22.737699  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:22.737756  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:22.762933  604010 cri.go:89] found id: ""
	I1213 11:56:22.762958  604010 logs.go:282] 0 containers: []
	W1213 11:56:22.762966  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:22.762973  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:22.763030  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:22.787428  604010 cri.go:89] found id: ""
	I1213 11:56:22.787453  604010 logs.go:282] 0 containers: []
	W1213 11:56:22.787463  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:22.787469  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:22.787531  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:22.812716  604010 cri.go:89] found id: ""
	I1213 11:56:22.812746  604010 logs.go:282] 0 containers: []
	W1213 11:56:22.812754  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:22.812761  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:22.812849  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:22.837817  604010 cri.go:89] found id: ""
	I1213 11:56:22.837844  604010 logs.go:282] 0 containers: []
	W1213 11:56:22.837853  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:22.837863  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:22.837883  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:22.893260  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:22.893294  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:22.917278  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:22.917388  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:23.026082  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	 output: 
	** stderr ** 
	E1213 11:56:23.017267    9756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:23.017959    9756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:23.019734    9756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:23.020131    9756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:23.021757    9756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:56:23.026106  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:23.026120  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:23.052026  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:23.052065  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:25.580545  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:25.591333  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:25.591403  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:25.616731  604010 cri.go:89] found id: ""
	I1213 11:56:25.616754  604010 logs.go:282] 0 containers: []
	W1213 11:56:25.616764  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:25.616771  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:25.616827  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:25.646111  604010 cri.go:89] found id: ""
	I1213 11:56:25.646135  604010 logs.go:282] 0 containers: []
	W1213 11:56:25.646144  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:25.646151  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:25.646212  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:25.674261  604010 cri.go:89] found id: ""
	I1213 11:56:25.674284  604010 logs.go:282] 0 containers: []
	W1213 11:56:25.674293  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:25.674300  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:25.674358  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:25.700613  604010 cri.go:89] found id: ""
	I1213 11:56:25.700636  604010 logs.go:282] 0 containers: []
	W1213 11:56:25.700644  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:25.700650  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:25.700707  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:25.728704  604010 cri.go:89] found id: ""
	I1213 11:56:25.728789  604010 logs.go:282] 0 containers: []
	W1213 11:56:25.728805  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:25.728818  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:25.728885  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:25.761516  604010 cri.go:89] found id: ""
	I1213 11:56:25.761538  604010 logs.go:282] 0 containers: []
	W1213 11:56:25.761548  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:25.761555  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:25.761635  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:25.786867  604010 cri.go:89] found id: ""
	I1213 11:56:25.786895  604010 logs.go:282] 0 containers: []
	W1213 11:56:25.786905  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:25.786911  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:25.786970  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:25.811462  604010 cri.go:89] found id: ""
	I1213 11:56:25.811485  604010 logs.go:282] 0 containers: []
	W1213 11:56:25.811493  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:25.811503  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:25.811514  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:25.866924  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:25.866955  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:25.883500  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:25.883530  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:25.977779  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	 output: 
	** stderr ** 
	E1213 11:56:25.966190    9862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:25.967164    9862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:25.969705    9862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:25.971514    9862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:25.972246    9862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:56:25.977806  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:25.977819  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:26.009949  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:26.010030  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:28.542187  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:28.552481  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:28.552607  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:28.581578  604010 cri.go:89] found id: ""
	I1213 11:56:28.581611  604010 logs.go:282] 0 containers: []
	W1213 11:56:28.581627  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:28.581634  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:28.581690  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:28.607125  604010 cri.go:89] found id: ""
	I1213 11:56:28.607149  604010 logs.go:282] 0 containers: []
	W1213 11:56:28.607157  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:28.607163  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:28.607220  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:28.632720  604010 cri.go:89] found id: ""
	I1213 11:56:28.632747  604010 logs.go:282] 0 containers: []
	W1213 11:56:28.632758  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:28.632765  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:28.632822  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:28.658222  604010 cri.go:89] found id: ""
	I1213 11:56:28.658251  604010 logs.go:282] 0 containers: []
	W1213 11:56:28.658260  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:28.658267  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:28.658325  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:28.682387  604010 cri.go:89] found id: ""
	I1213 11:56:28.682425  604010 logs.go:282] 0 containers: []
	W1213 11:56:28.682436  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:28.682443  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:28.682519  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:28.707965  604010 cri.go:89] found id: ""
	I1213 11:56:28.708001  604010 logs.go:282] 0 containers: []
	W1213 11:56:28.708011  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:28.708024  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:28.708094  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:28.737087  604010 cri.go:89] found id: ""
	I1213 11:56:28.737115  604010 logs.go:282] 0 containers: []
	W1213 11:56:28.737124  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:28.737130  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:28.737189  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:28.761982  604010 cri.go:89] found id: ""
	I1213 11:56:28.762059  604010 logs.go:282] 0 containers: []
	W1213 11:56:28.762081  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:28.762108  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:28.762148  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:28.817649  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:28.817687  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:28.833874  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:28.833904  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:28.901287  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	 output: 
	** stderr ** 
	E1213 11:56:28.892846    9975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:28.893499    9975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:28.895107    9975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:28.895608    9975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:28.897226    9975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:56:28.901308  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:28.901319  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:28.943036  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:28.943114  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:31.504085  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:31.516702  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:31.516776  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:31.541829  604010 cri.go:89] found id: ""
	I1213 11:56:31.541852  604010 logs.go:282] 0 containers: []
	W1213 11:56:31.541861  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:31.541868  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:31.541927  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:31.567128  604010 cri.go:89] found id: ""
	I1213 11:56:31.567153  604010 logs.go:282] 0 containers: []
	W1213 11:56:31.567162  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:31.567169  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:31.567228  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:31.592889  604010 cri.go:89] found id: ""
	I1213 11:56:31.592914  604010 logs.go:282] 0 containers: []
	W1213 11:56:31.592924  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:31.592931  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:31.592988  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:31.620810  604010 cri.go:89] found id: ""
	I1213 11:56:31.620834  604010 logs.go:282] 0 containers: []
	W1213 11:56:31.620843  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:31.620850  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:31.620907  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:31.645931  604010 cri.go:89] found id: ""
	I1213 11:56:31.645958  604010 logs.go:282] 0 containers: []
	W1213 11:56:31.645968  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:31.645975  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:31.646034  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:31.671037  604010 cri.go:89] found id: ""
	I1213 11:56:31.671065  604010 logs.go:282] 0 containers: []
	W1213 11:56:31.671074  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:31.671116  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:31.671180  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:31.696779  604010 cri.go:89] found id: ""
	I1213 11:56:31.696805  604010 logs.go:282] 0 containers: []
	W1213 11:56:31.696814  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:31.696820  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:31.696886  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:31.721074  604010 cri.go:89] found id: ""
	I1213 11:56:31.721152  604010 logs.go:282] 0 containers: []
	W1213 11:56:31.721175  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:31.721198  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:31.721238  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:31.776685  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:31.776720  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:31.793212  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:31.793241  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:31.856954  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	 output: 
	** stderr ** 
	E1213 11:56:31.848666   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:31.849288   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:31.850793   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:31.851220   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:31.852660   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:56:31.857017  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:31.857044  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:31.882038  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:31.882070  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:34.425618  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:34.436018  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:34.436163  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:34.460322  604010 cri.go:89] found id: ""
	I1213 11:56:34.460347  604010 logs.go:282] 0 containers: []
	W1213 11:56:34.460356  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:34.460362  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:34.460442  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:34.484514  604010 cri.go:89] found id: ""
	I1213 11:56:34.484582  604010 logs.go:282] 0 containers: []
	W1213 11:56:34.484607  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:34.484622  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:34.484695  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:34.513969  604010 cri.go:89] found id: ""
	I1213 11:56:34.514006  604010 logs.go:282] 0 containers: []
	W1213 11:56:34.514016  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:34.514023  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:34.514089  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:34.541219  604010 cri.go:89] found id: ""
	I1213 11:56:34.541245  604010 logs.go:282] 0 containers: []
	W1213 11:56:34.541254  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:34.541260  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:34.541323  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:34.570631  604010 cri.go:89] found id: ""
	I1213 11:56:34.570653  604010 logs.go:282] 0 containers: []
	W1213 11:56:34.570662  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:34.570668  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:34.570749  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:34.594597  604010 cri.go:89] found id: ""
	I1213 11:56:34.594636  604010 logs.go:282] 0 containers: []
	W1213 11:56:34.594645  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:34.594651  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:34.594741  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:34.618131  604010 cri.go:89] found id: ""
	I1213 11:56:34.618159  604010 logs.go:282] 0 containers: []
	W1213 11:56:34.618168  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:34.618174  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:34.618230  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:34.645177  604010 cri.go:89] found id: ""
	I1213 11:56:34.645204  604010 logs.go:282] 0 containers: []
	W1213 11:56:34.645213  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:34.645223  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:34.645235  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:34.674203  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:34.674235  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:34.731298  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:34.731332  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:34.747591  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:34.747623  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:34.811066  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	 output: 
	** stderr ** 
	E1213 11:56:34.802515   10213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:34.803209   10213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:34.804716   10213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:34.805051   10213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:34.806504   10213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:56:34.811137  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:34.811171  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:37.342058  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:37.352580  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:37.352649  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:37.376663  604010 cri.go:89] found id: ""
	I1213 11:56:37.376689  604010 logs.go:282] 0 containers: []
	W1213 11:56:37.376698  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:37.376704  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:37.376763  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:37.400694  604010 cri.go:89] found id: ""
	I1213 11:56:37.400720  604010 logs.go:282] 0 containers: []
	W1213 11:56:37.400728  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:37.400735  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:37.400796  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:37.425687  604010 cri.go:89] found id: ""
	I1213 11:56:37.425715  604010 logs.go:282] 0 containers: []
	W1213 11:56:37.425724  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:37.425730  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:37.425787  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:37.450160  604010 cri.go:89] found id: ""
	I1213 11:56:37.450189  604010 logs.go:282] 0 containers: []
	W1213 11:56:37.450198  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:37.450205  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:37.450266  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:37.475110  604010 cri.go:89] found id: ""
	I1213 11:56:37.475133  604010 logs.go:282] 0 containers: []
	W1213 11:56:37.475142  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:37.475149  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:37.475207  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:37.499102  604010 cri.go:89] found id: ""
	I1213 11:56:37.499171  604010 logs.go:282] 0 containers: []
	W1213 11:56:37.499196  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:37.499207  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:37.499282  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:37.528584  604010 cri.go:89] found id: ""
	I1213 11:56:37.528609  604010 logs.go:282] 0 containers: []
	W1213 11:56:37.528618  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:37.528624  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:37.528708  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:37.554175  604010 cri.go:89] found id: ""
	I1213 11:56:37.554259  604010 logs.go:282] 0 containers: []
	W1213 11:56:37.554283  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:37.554304  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:37.554347  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:37.612670  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:37.612706  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:37.629187  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:37.629218  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:37.694612  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:37.685617   10314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:37.686619   10314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:37.688268   10314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:37.688681   10314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:37.690407   10314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:56:37.685617   10314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:37.686619   10314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:37.688268   10314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:37.688681   10314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:37.690407   10314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:56:37.694640  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:37.694653  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:37.719952  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:37.719988  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
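
Each cycle above checks for every control-plane component the same way: cri.go runs `sudo crictl ps -a --quiet --name=<component>` and treats empty output as the "No container was found matching" warning. A minimal local sketch of that check, assuming crictl is on PATH (minikube actually runs it over SSH via its ssh_runner); the helper name is illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs shells out to crictl the same way the log lines above do.
	// Empty output corresponds to the `found id: ""` / "No container was found
	// matching" pair in the log.
	func listContainerIDs(component string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		// The eight names queried in each cycle of the log.
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
			ids, err := listContainerIDs(c)
			if err != nil {
				fmt.Printf("listing %q failed: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", c)
				continue
			}
			fmt.Printf("%s: %v\n", c, ids)
		}
	}
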
	I1213 11:56:40.252201  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:40.265281  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:40.265368  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:40.289761  604010 cri.go:89] found id: ""
	I1213 11:56:40.289841  604010 logs.go:282] 0 containers: []
	W1213 11:56:40.289865  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:40.289885  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:40.289969  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:40.314886  604010 cri.go:89] found id: ""
	I1213 11:56:40.314911  604010 logs.go:282] 0 containers: []
	W1213 11:56:40.314920  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:40.314928  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:40.314988  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:40.340433  604010 cri.go:89] found id: ""
	I1213 11:56:40.340460  604010 logs.go:282] 0 containers: []
	W1213 11:56:40.340469  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:40.340475  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:40.340535  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:40.369630  604010 cri.go:89] found id: ""
	I1213 11:56:40.369657  604010 logs.go:282] 0 containers: []
	W1213 11:56:40.369666  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:40.369672  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:40.369730  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:40.396456  604010 cri.go:89] found id: ""
	I1213 11:56:40.396480  604010 logs.go:282] 0 containers: []
	W1213 11:56:40.396489  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:40.396495  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:40.396550  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:40.420915  604010 cri.go:89] found id: ""
	I1213 11:56:40.420982  604010 logs.go:282] 0 containers: []
	W1213 11:56:40.420996  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:40.421004  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:40.421067  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:40.445305  604010 cri.go:89] found id: ""
	I1213 11:56:40.445339  604010 logs.go:282] 0 containers: []
	W1213 11:56:40.445349  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:40.445355  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:40.445423  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:40.470359  604010 cri.go:89] found id: ""
	I1213 11:56:40.470396  604010 logs.go:282] 0 containers: []
	W1213 11:56:40.470406  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:40.470415  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:40.470428  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:40.529991  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:40.530029  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:40.545704  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:40.545785  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:40.614385  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:40.605002   10428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:40.605654   10428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:40.608020   10428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:40.608670   10428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:40.609867   10428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:56:40.605002   10428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:40.605654   10428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:40.608020   10428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:40.608670   10428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:40.609867   10428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:56:40.614411  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:40.614423  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:40.640189  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:40.640226  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
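
The repeated "describe nodes" failure is consistent with the empty container lists: kubectl dials localhost:8443 and gets "connection refused", meaning nothing is listening on the apiserver port at all, as opposed to a timeout, which would point at an unresponsive but running process. A minimal sketch that distinguishes the two cases, assuming the default port 8443 seen in the errors above:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// "connection refused" (as in the log) means the port is closed;
		// a timeout would instead suggest a hung or firewalled listener.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver port closed:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}
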
	I1213 11:56:43.171206  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:43.187532  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:43.187604  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:43.255773  604010 cri.go:89] found id: ""
	I1213 11:56:43.255816  604010 logs.go:282] 0 containers: []
	W1213 11:56:43.255826  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:43.255833  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:43.255893  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:43.282066  604010 cri.go:89] found id: ""
	I1213 11:56:43.282095  604010 logs.go:282] 0 containers: []
	W1213 11:56:43.282104  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:43.282110  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:43.282169  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:43.307994  604010 cri.go:89] found id: ""
	I1213 11:56:43.308022  604010 logs.go:282] 0 containers: []
	W1213 11:56:43.308031  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:43.308037  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:43.308094  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:43.333649  604010 cri.go:89] found id: ""
	I1213 11:56:43.333682  604010 logs.go:282] 0 containers: []
	W1213 11:56:43.333692  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:43.333699  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:43.333761  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:43.364007  604010 cri.go:89] found id: ""
	I1213 11:56:43.364037  604010 logs.go:282] 0 containers: []
	W1213 11:56:43.364045  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:43.364052  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:43.364110  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:43.389343  604010 cri.go:89] found id: ""
	I1213 11:56:43.389381  604010 logs.go:282] 0 containers: []
	W1213 11:56:43.389389  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:43.389396  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:43.389466  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:43.414572  604010 cri.go:89] found id: ""
	I1213 11:56:43.414608  604010 logs.go:282] 0 containers: []
	W1213 11:56:43.414618  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:43.414624  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:43.414711  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:43.439971  604010 cri.go:89] found id: ""
	I1213 11:56:43.439999  604010 logs.go:282] 0 containers: []
	W1213 11:56:43.440008  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:43.440018  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:43.440034  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:43.455350  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:43.455380  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:43.518971  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:43.510133   10540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:43.510875   10540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:43.512575   10540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:43.513204   10540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:43.514989   10540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:56:43.510133   10540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:43.510875   10540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:43.512575   10540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:43.513204   10540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:43.514989   10540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:56:43.519004  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:43.519017  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:43.543826  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:43.543863  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:43.571534  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:43.571561  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:46.127908  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:46.138548  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:46.138627  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:46.177176  604010 cri.go:89] found id: ""
	I1213 11:56:46.177205  604010 logs.go:282] 0 containers: []
	W1213 11:56:46.177214  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:46.177220  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:46.177280  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:46.250872  604010 cri.go:89] found id: ""
	I1213 11:56:46.250897  604010 logs.go:282] 0 containers: []
	W1213 11:56:46.250906  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:46.250913  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:46.250972  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:46.276982  604010 cri.go:89] found id: ""
	I1213 11:56:46.277008  604010 logs.go:282] 0 containers: []
	W1213 11:56:46.277020  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:46.277026  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:46.277086  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:46.308722  604010 cri.go:89] found id: ""
	I1213 11:56:46.308745  604010 logs.go:282] 0 containers: []
	W1213 11:56:46.308754  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:46.308760  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:46.308819  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:46.333457  604010 cri.go:89] found id: ""
	I1213 11:56:46.333479  604010 logs.go:282] 0 containers: []
	W1213 11:56:46.333488  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:46.333495  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:46.333551  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:46.361010  604010 cri.go:89] found id: ""
	I1213 11:56:46.361034  604010 logs.go:282] 0 containers: []
	W1213 11:56:46.361042  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:46.361049  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:46.361107  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:46.385580  604010 cri.go:89] found id: ""
	I1213 11:56:46.385608  604010 logs.go:282] 0 containers: []
	W1213 11:56:46.385625  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:46.385631  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:46.385689  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:46.410013  604010 cri.go:89] found id: ""
	I1213 11:56:46.410041  604010 logs.go:282] 0 containers: []
	W1213 11:56:46.410050  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:46.410059  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:46.410071  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:46.474489  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:46.465232   10652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:46.465851   10652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:46.467612   10652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:46.468248   10652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:46.469990   10652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:56:46.465232   10652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:46.465851   10652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:46.467612   10652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:46.468248   10652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:46.469990   10652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:56:46.474512  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:46.474525  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:46.499926  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:46.499961  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:46.529519  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:46.529543  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:46.585780  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:46.585816  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
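
Every "Gathering logs for ..." pass collects the same five sources, in the shell form the ssh_runner lines show: kubelet and containerd via journalctl, dmesg filtered to warnings and above, `kubectl describe nodes`, and container status via crictl with a docker fallback. A sketch of that loop, run locally for brevity rather than over SSH as minikube does:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// The five log sources each gather cycle above walks through, verbatim
	// from the ssh_runner lines in the log.
	var sources = []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
		{"containerd", "sudo journalctl -u containerd -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}

	func main() {
		for _, s := range sources {
			fmt.Println("Gathering logs for", s.name, "...")
			out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
			if err != nil {
				// "describe nodes" exits 1 while the apiserver is down, as seen above.
				fmt.Printf("failed %s: %v\n", s.name, err)
			}
			fmt.Print(string(out))
		}
	}
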
	I1213 11:56:49.102338  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:49.113041  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:49.113164  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:49.137484  604010 cri.go:89] found id: ""
	I1213 11:56:49.137527  604010 logs.go:282] 0 containers: []
	W1213 11:56:49.137536  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:49.137543  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:49.137633  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:49.176305  604010 cri.go:89] found id: ""
	I1213 11:56:49.176345  604010 logs.go:282] 0 containers: []
	W1213 11:56:49.176354  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:49.176360  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:49.176445  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:49.216965  604010 cri.go:89] found id: ""
	I1213 11:56:49.216992  604010 logs.go:282] 0 containers: []
	W1213 11:56:49.217001  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:49.217007  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:49.217076  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:49.262147  604010 cri.go:89] found id: ""
	I1213 11:56:49.262226  604010 logs.go:282] 0 containers: []
	W1213 11:56:49.262256  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:49.262277  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:49.262367  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:49.292097  604010 cri.go:89] found id: ""
	I1213 11:56:49.292124  604010 logs.go:282] 0 containers: []
	W1213 11:56:49.292133  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:49.292140  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:49.292195  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:49.316193  604010 cri.go:89] found id: ""
	I1213 11:56:49.316219  604010 logs.go:282] 0 containers: []
	W1213 11:56:49.316228  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:49.316235  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:49.316293  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:49.341385  604010 cri.go:89] found id: ""
	I1213 11:56:49.341411  604010 logs.go:282] 0 containers: []
	W1213 11:56:49.341421  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:49.341434  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:49.341503  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:49.365851  604010 cri.go:89] found id: ""
	I1213 11:56:49.365874  604010 logs.go:282] 0 containers: []
	W1213 11:56:49.365883  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:49.365892  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:49.365903  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:49.381508  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:49.381537  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:49.444383  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:49.436163   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:49.436758   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:49.438415   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:49.438958   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:49.440549   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:56:49.436163   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:49.436758   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:49.438415   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:49.438958   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:49.440549   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:56:49.444406  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:49.444419  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:49.469593  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:49.469636  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:49.497881  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:49.497912  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:52.053968  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:52.065301  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:52.065418  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:52.096894  604010 cri.go:89] found id: ""
	I1213 11:56:52.096966  604010 logs.go:282] 0 containers: []
	W1213 11:56:52.096988  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:52.097007  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:52.097097  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:52.124148  604010 cri.go:89] found id: ""
	I1213 11:56:52.124173  604010 logs.go:282] 0 containers: []
	W1213 11:56:52.124186  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:52.124193  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:52.124306  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:52.160416  604010 cri.go:89] found id: ""
	I1213 11:56:52.160439  604010 logs.go:282] 0 containers: []
	W1213 11:56:52.160448  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:52.160455  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:52.160513  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:52.200069  604010 cri.go:89] found id: ""
	I1213 11:56:52.200095  604010 logs.go:282] 0 containers: []
	W1213 11:56:52.200104  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:52.200111  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:52.200174  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:52.263224  604010 cri.go:89] found id: ""
	I1213 11:56:52.263295  604010 logs.go:282] 0 containers: []
	W1213 11:56:52.263310  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:52.263318  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:52.263375  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:52.288649  604010 cri.go:89] found id: ""
	I1213 11:56:52.288675  604010 logs.go:282] 0 containers: []
	W1213 11:56:52.288684  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:52.288691  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:52.288754  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:52.316561  604010 cri.go:89] found id: ""
	I1213 11:56:52.316588  604010 logs.go:282] 0 containers: []
	W1213 11:56:52.316596  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:52.316603  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:52.316660  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:52.341885  604010 cri.go:89] found id: ""
	I1213 11:56:52.341909  604010 logs.go:282] 0 containers: []
	W1213 11:56:52.341918  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:52.341927  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:52.341938  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:52.397001  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:52.397038  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:52.415607  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:52.415635  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:52.493248  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:52.484194   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:52.484676   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:52.486433   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:52.486904   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:52.488650   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:56:52.484194   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:52.484676   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:52.486433   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:52.486904   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:52.488650   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:56:52.493274  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:52.493288  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:52.518551  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:52.518588  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:55.047907  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:55.059302  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:55.059421  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:55.085237  604010 cri.go:89] found id: ""
	I1213 11:56:55.085271  604010 logs.go:282] 0 containers: []
	W1213 11:56:55.085281  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:55.085288  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:55.085362  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:55.112434  604010 cri.go:89] found id: ""
	I1213 11:56:55.112462  604010 logs.go:282] 0 containers: []
	W1213 11:56:55.112475  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:55.112482  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:55.112544  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:55.138067  604010 cri.go:89] found id: ""
	I1213 11:56:55.138101  604010 logs.go:282] 0 containers: []
	W1213 11:56:55.138110  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:55.138117  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:55.138184  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:55.179401  604010 cri.go:89] found id: ""
	I1213 11:56:55.179522  604010 logs.go:282] 0 containers: []
	W1213 11:56:55.179548  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:55.179588  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:55.179766  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:55.234369  604010 cri.go:89] found id: ""
	I1213 11:56:55.234462  604010 logs.go:282] 0 containers: []
	W1213 11:56:55.234499  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:55.234544  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:55.234676  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:55.277189  604010 cri.go:89] found id: ""
	I1213 11:56:55.277271  604010 logs.go:282] 0 containers: []
	W1213 11:56:55.277294  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:55.277314  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:55.277416  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:55.310856  604010 cri.go:89] found id: ""
	I1213 11:56:55.310933  604010 logs.go:282] 0 containers: []
	W1213 11:56:55.310949  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:55.310958  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:55.311020  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:55.337357  604010 cri.go:89] found id: ""
	I1213 11:56:55.337453  604010 logs.go:282] 0 containers: []
	W1213 11:56:55.337468  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:55.337478  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:55.337490  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:55.392569  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:55.392607  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:55.408576  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:55.408608  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:55.471726  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:55.463854   10996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:55.464422   10996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:55.465928   10996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:55.466440   10996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:55.467966   10996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:56:55.463854   10996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:55.464422   10996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:55.465928   10996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:55.466440   10996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:55.467966   10996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:56:55.471749  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:55.471762  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:55.497230  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:55.497266  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:56:58.026521  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:56:58.040495  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:56:58.040579  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:56:58.067542  604010 cri.go:89] found id: ""
	I1213 11:56:58.067567  604010 logs.go:282] 0 containers: []
	W1213 11:56:58.067576  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:56:58.067583  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:56:58.067649  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:56:58.092616  604010 cri.go:89] found id: ""
	I1213 11:56:58.092642  604010 logs.go:282] 0 containers: []
	W1213 11:56:58.092651  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:56:58.092657  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:56:58.092714  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:56:58.117533  604010 cri.go:89] found id: ""
	I1213 11:56:58.117561  604010 logs.go:282] 0 containers: []
	W1213 11:56:58.117572  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:56:58.117578  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:56:58.117669  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:56:58.143441  604010 cri.go:89] found id: ""
	I1213 11:56:58.143465  604010 logs.go:282] 0 containers: []
	W1213 11:56:58.143474  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:56:58.143481  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:56:58.143540  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:56:58.191063  604010 cri.go:89] found id: ""
	I1213 11:56:58.191086  604010 logs.go:282] 0 containers: []
	W1213 11:56:58.191096  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:56:58.191102  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:56:58.191175  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:56:58.233666  604010 cri.go:89] found id: ""
	I1213 11:56:58.233709  604010 logs.go:282] 0 containers: []
	W1213 11:56:58.233727  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:56:58.233734  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:56:58.233805  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:56:58.285997  604010 cri.go:89] found id: ""
	I1213 11:56:58.286020  604010 logs.go:282] 0 containers: []
	W1213 11:56:58.286029  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:56:58.286035  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:56:58.286099  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:56:58.313519  604010 cri.go:89] found id: ""
	I1213 11:56:58.313544  604010 logs.go:282] 0 containers: []
	W1213 11:56:58.313553  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:56:58.313570  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:56:58.313581  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:56:58.372174  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:56:58.372208  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:56:58.387775  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:56:58.387803  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:56:58.457676  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:56:58.448571   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:58.449279   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:58.451118   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:58.451644   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:58.453241   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:56:58.448571   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:58.449279   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:58.451118   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:58.451644   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:56:58.453241   11107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:56:58.457698  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:56:58.457711  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:56:58.482922  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:56:58.482956  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
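
The timestamps show the whole sequence repeating roughly every three seconds, each round starting with `sudo pgrep -xnf kube-apiserver.*minikube.*` to see whether the apiserver process has appeared yet. A minimal sketch of that wait-and-gather shape; the 3s interval matches the spacing of the log timestamps, while the 30s deadline is purely an illustrative assumption (the real test waits far longer):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer polls pgrep, as each cycle above begins with, until the
	// kube-apiserver process appears or the deadline passes.
	func waitForAPIServer(deadline time.Duration) bool {
		start := time.Now()
		for time.Since(start) < deadline {
			// pgrep exits 0 only when a matching process exists.
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return true
			}
			// No process yet: this is the point where the real flow gathers logs.
			fmt.Println("kube-apiserver not running; gathering logs ...")
			time.Sleep(3 * time.Second)
		}
		return false
	}

	func main() {
		if !waitForAPIServer(30 * time.Second) {
			fmt.Println("gave up waiting for kube-apiserver")
		}
	}
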
	I1213 11:57:01.016291  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:01.027467  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:01.027540  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:01.061002  604010 cri.go:89] found id: ""
	I1213 11:57:01.061026  604010 logs.go:282] 0 containers: []
	W1213 11:57:01.061035  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:01.061041  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:01.061099  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:01.090375  604010 cri.go:89] found id: ""
	I1213 11:57:01.090403  604010 logs.go:282] 0 containers: []
	W1213 11:57:01.090412  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:01.090418  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:01.090476  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:01.118417  604010 cri.go:89] found id: ""
	I1213 11:57:01.118441  604010 logs.go:282] 0 containers: []
	W1213 11:57:01.118450  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:01.118456  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:01.118521  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:01.147901  604010 cri.go:89] found id: ""
	I1213 11:57:01.147929  604010 logs.go:282] 0 containers: []
	W1213 11:57:01.147938  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:01.147946  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:01.148009  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:01.207604  604010 cri.go:89] found id: ""
	I1213 11:57:01.207681  604010 logs.go:282] 0 containers: []
	W1213 11:57:01.207708  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:01.207727  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:01.207818  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:01.263340  604010 cri.go:89] found id: ""
	I1213 11:57:01.263407  604010 logs.go:282] 0 containers: []
	W1213 11:57:01.263428  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:01.263446  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:01.263531  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:01.296139  604010 cri.go:89] found id: ""
	I1213 11:57:01.296213  604010 logs.go:282] 0 containers: []
	W1213 11:57:01.296231  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:01.296242  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:01.296313  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:01.323150  604010 cri.go:89] found id: ""
	I1213 11:57:01.323175  604010 logs.go:282] 0 containers: []
	W1213 11:57:01.323185  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:01.323194  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:01.323206  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:01.351631  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:01.351659  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:01.410361  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:01.410398  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:01.426884  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:01.426921  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:01.495923  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:01.487940   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:01.488738   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:01.490397   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:01.490777   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:01.492041   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:57:01.495947  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:01.495960  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:04.023306  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:04.034376  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:04.034451  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:04.058883  604010 cri.go:89] found id: ""
	I1213 11:57:04.058911  604010 logs.go:282] 0 containers: []
	W1213 11:57:04.058921  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:04.058929  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:04.058990  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:04.084571  604010 cri.go:89] found id: ""
	I1213 11:57:04.084598  604010 logs.go:282] 0 containers: []
	W1213 11:57:04.084607  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:04.084615  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:04.084698  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:04.111492  604010 cri.go:89] found id: ""
	I1213 11:57:04.111518  604010 logs.go:282] 0 containers: []
	W1213 11:57:04.111527  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:04.111534  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:04.111594  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:04.140605  604010 cri.go:89] found id: ""
	I1213 11:57:04.140632  604010 logs.go:282] 0 containers: []
	W1213 11:57:04.140641  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:04.140648  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:04.140709  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:04.170556  604010 cri.go:89] found id: ""
	I1213 11:57:04.170583  604010 logs.go:282] 0 containers: []
	W1213 11:57:04.170592  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:04.170598  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:04.170654  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:04.221024  604010 cri.go:89] found id: ""
	I1213 11:57:04.221047  604010 logs.go:282] 0 containers: []
	W1213 11:57:04.221056  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:04.221062  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:04.221120  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:04.258557  604010 cri.go:89] found id: ""
	I1213 11:57:04.258583  604010 logs.go:282] 0 containers: []
	W1213 11:57:04.258601  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:04.258608  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:04.258667  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:04.286096  604010 cri.go:89] found id: ""
	I1213 11:57:04.286121  604010 logs.go:282] 0 containers: []
	W1213 11:57:04.286130  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:04.286140  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:04.286154  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:04.342856  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:04.342892  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:04.359212  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:04.359247  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:04.426841  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:04.417916   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:04.418505   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:04.420627   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:04.421110   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:04.422742   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:57:04.426863  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:04.426876  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:04.452958  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:04.452999  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:06.985291  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:06.996435  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:06.996506  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:07.027757  604010 cri.go:89] found id: ""
	I1213 11:57:07.027792  604010 logs.go:282] 0 containers: []
	W1213 11:57:07.027802  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:07.027808  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:07.027875  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:07.053033  604010 cri.go:89] found id: ""
	I1213 11:57:07.053059  604010 logs.go:282] 0 containers: []
	W1213 11:57:07.053068  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:07.053075  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:07.053135  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:07.077293  604010 cri.go:89] found id: ""
	I1213 11:57:07.077320  604010 logs.go:282] 0 containers: []
	W1213 11:57:07.077330  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:07.077336  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:07.077400  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:07.101590  604010 cri.go:89] found id: ""
	I1213 11:57:07.101615  604010 logs.go:282] 0 containers: []
	W1213 11:57:07.101630  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:07.101636  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:07.101693  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:07.129837  604010 cri.go:89] found id: ""
	I1213 11:57:07.129867  604010 logs.go:282] 0 containers: []
	W1213 11:57:07.129877  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:07.129883  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:07.129943  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:07.155693  604010 cri.go:89] found id: ""
	I1213 11:57:07.155719  604010 logs.go:282] 0 containers: []
	W1213 11:57:07.155729  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:07.155735  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:07.155799  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:07.208290  604010 cri.go:89] found id: ""
	I1213 11:57:07.208318  604010 logs.go:282] 0 containers: []
	W1213 11:57:07.208327  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:07.208334  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:07.208398  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:07.260450  604010 cri.go:89] found id: ""
	I1213 11:57:07.260475  604010 logs.go:282] 0 containers: []
	W1213 11:57:07.260485  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:07.260494  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:07.260505  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:07.317882  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:07.317918  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:07.334495  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:07.334524  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:07.403490  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:07.393965   11443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:07.394975   11443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:07.396603   11443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:07.397190   11443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:07.398983   11443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:57:07.403516  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:07.403531  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:07.428864  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:07.428901  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:09.962852  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:09.973890  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:09.973963  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:10.008764  604010 cri.go:89] found id: ""
	I1213 11:57:10.008791  604010 logs.go:282] 0 containers: []
	W1213 11:57:10.008801  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:10.008808  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:10.008881  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:10.042627  604010 cri.go:89] found id: ""
	I1213 11:57:10.042655  604010 logs.go:282] 0 containers: []
	W1213 11:57:10.042667  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:10.042674  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:10.042762  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:10.070196  604010 cri.go:89] found id: ""
	I1213 11:57:10.070222  604010 logs.go:282] 0 containers: []
	W1213 11:57:10.070231  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:10.070238  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:10.070304  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:10.097458  604010 cri.go:89] found id: ""
	I1213 11:57:10.097484  604010 logs.go:282] 0 containers: []
	W1213 11:57:10.097493  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:10.097500  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:10.097559  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:10.124061  604010 cri.go:89] found id: ""
	I1213 11:57:10.124087  604010 logs.go:282] 0 containers: []
	W1213 11:57:10.124095  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:10.124101  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:10.124158  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:10.153659  604010 cri.go:89] found id: ""
	I1213 11:57:10.153696  604010 logs.go:282] 0 containers: []
	W1213 11:57:10.153705  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:10.153713  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:10.153792  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:10.226910  604010 cri.go:89] found id: ""
	I1213 11:57:10.226938  604010 logs.go:282] 0 containers: []
	W1213 11:57:10.226947  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:10.226953  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:10.227010  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:10.265652  604010 cri.go:89] found id: ""
	I1213 11:57:10.265676  604010 logs.go:282] 0 containers: []
	W1213 11:57:10.265685  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:10.265695  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:10.265707  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:10.332797  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:10.323569   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:10.325115   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:10.325998   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:10.326908   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:10.328530   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:57:10.332820  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:10.332832  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:10.357553  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:10.357592  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:10.391809  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:10.391838  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:10.447255  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:10.447293  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:12.963670  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:12.974670  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:12.974767  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:13.006230  604010 cri.go:89] found id: ""
	I1213 11:57:13.006259  604010 logs.go:282] 0 containers: []
	W1213 11:57:13.006268  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:13.006275  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:13.006340  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:13.031301  604010 cri.go:89] found id: ""
	I1213 11:57:13.031325  604010 logs.go:282] 0 containers: []
	W1213 11:57:13.031334  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:13.031340  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:13.031396  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:13.055897  604010 cri.go:89] found id: ""
	I1213 11:57:13.055927  604010 logs.go:282] 0 containers: []
	W1213 11:57:13.055936  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:13.055942  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:13.056003  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:13.081708  604010 cri.go:89] found id: ""
	I1213 11:57:13.081733  604010 logs.go:282] 0 containers: []
	W1213 11:57:13.081748  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:13.081755  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:13.081812  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:13.111812  604010 cri.go:89] found id: ""
	I1213 11:57:13.111885  604010 logs.go:282] 0 containers: []
	W1213 11:57:13.111900  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:13.111909  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:13.111971  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:13.136957  604010 cri.go:89] found id: ""
	I1213 11:57:13.136992  604010 logs.go:282] 0 containers: []
	W1213 11:57:13.137001  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:13.137025  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:13.137099  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:13.180320  604010 cri.go:89] found id: ""
	I1213 11:57:13.180354  604010 logs.go:282] 0 containers: []
	W1213 11:57:13.180363  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:13.180370  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:13.180438  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:13.232992  604010 cri.go:89] found id: ""
	I1213 11:57:13.233027  604010 logs.go:282] 0 containers: []
	W1213 11:57:13.233037  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:13.233047  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:13.233060  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:13.306234  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:13.297958   11664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:13.298476   11664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:13.299586   11664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:13.299955   11664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:13.301394   11664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:57:13.306257  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:13.306272  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:13.331798  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:13.331837  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:13.364219  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:13.364248  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:13.419158  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:13.419191  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:15.935716  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:15.946701  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:15.946796  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:15.972298  604010 cri.go:89] found id: ""
	I1213 11:57:15.972375  604010 logs.go:282] 0 containers: []
	W1213 11:57:15.972392  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:15.972399  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:15.972468  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:15.997435  604010 cri.go:89] found id: ""
	I1213 11:57:15.997458  604010 logs.go:282] 0 containers: []
	W1213 11:57:15.997467  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:15.997474  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:15.997540  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:16.026069  604010 cri.go:89] found id: ""
	I1213 11:57:16.026107  604010 logs.go:282] 0 containers: []
	W1213 11:57:16.026116  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:16.026123  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:16.026190  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:16.051047  604010 cri.go:89] found id: ""
	I1213 11:57:16.051125  604010 logs.go:282] 0 containers: []
	W1213 11:57:16.051141  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:16.051149  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:16.051209  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:16.076992  604010 cri.go:89] found id: ""
	I1213 11:57:16.077060  604010 logs.go:282] 0 containers: []
	W1213 11:57:16.077086  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:16.077104  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:16.077190  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:16.104719  604010 cri.go:89] found id: ""
	I1213 11:57:16.104788  604010 logs.go:282] 0 containers: []
	W1213 11:57:16.104811  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:16.104830  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:16.104918  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:16.136668  604010 cri.go:89] found id: ""
	I1213 11:57:16.136696  604010 logs.go:282] 0 containers: []
	W1213 11:57:16.136705  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:16.136712  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:16.136772  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:16.184065  604010 cri.go:89] found id: ""
	I1213 11:57:16.184100  604010 logs.go:282] 0 containers: []
	W1213 11:57:16.184111  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:16.184120  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:16.184153  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:16.270928  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:16.270968  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:16.287140  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:16.287175  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:16.357398  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:16.349038   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:16.349516   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:16.351357   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:16.351864   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:16.353557   11783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:57:16.357423  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:16.357435  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:16.381740  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:16.381774  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:18.910619  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:18.921087  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:18.921166  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:18.946478  604010 cri.go:89] found id: ""
	I1213 11:57:18.946503  604010 logs.go:282] 0 containers: []
	W1213 11:57:18.946512  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:18.946519  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:18.946578  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:18.971279  604010 cri.go:89] found id: ""
	I1213 11:57:18.971304  604010 logs.go:282] 0 containers: []
	W1213 11:57:18.971313  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:18.971320  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:18.971378  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:18.996033  604010 cri.go:89] found id: ""
	I1213 11:57:18.996059  604010 logs.go:282] 0 containers: []
	W1213 11:57:18.996068  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:18.996074  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:18.996158  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:19.021977  604010 cri.go:89] found id: ""
	I1213 11:57:19.022006  604010 logs.go:282] 0 containers: []
	W1213 11:57:19.022015  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:19.022024  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:19.022086  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:19.046193  604010 cri.go:89] found id: ""
	I1213 11:57:19.046221  604010 logs.go:282] 0 containers: []
	W1213 11:57:19.046230  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:19.046236  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:19.046297  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:19.070868  604010 cri.go:89] found id: ""
	I1213 11:57:19.070895  604010 logs.go:282] 0 containers: []
	W1213 11:57:19.070904  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:19.070911  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:19.071001  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:19.096253  604010 cri.go:89] found id: ""
	I1213 11:57:19.096276  604010 logs.go:282] 0 containers: []
	W1213 11:57:19.096285  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:19.096292  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:19.096373  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:19.121131  604010 cri.go:89] found id: ""
	I1213 11:57:19.121167  604010 logs.go:282] 0 containers: []
	W1213 11:57:19.121177  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:19.121186  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:19.121216  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:19.208507  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:19.190547   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:19.191444   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:19.193889   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:19.194572   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:19.199234   11885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:57:19.208539  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:19.208553  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:19.237572  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:19.237656  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:19.276423  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:19.276448  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:19.334610  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:19.334648  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:21.851744  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:21.861936  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:21.861999  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:21.885880  604010 cri.go:89] found id: ""
	I1213 11:57:21.885901  604010 logs.go:282] 0 containers: []
	W1213 11:57:21.885909  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:21.885916  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:21.885971  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:21.909866  604010 cri.go:89] found id: ""
	I1213 11:57:21.909889  604010 logs.go:282] 0 containers: []
	W1213 11:57:21.909898  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:21.909904  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:21.909961  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:21.934547  604010 cri.go:89] found id: ""
	I1213 11:57:21.934576  604010 logs.go:282] 0 containers: []
	W1213 11:57:21.934585  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:21.934591  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:21.934651  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:21.959889  604010 cri.go:89] found id: ""
	I1213 11:57:21.959915  604010 logs.go:282] 0 containers: []
	W1213 11:57:21.959925  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:21.959932  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:21.959988  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:21.989023  604010 cri.go:89] found id: ""
	I1213 11:57:21.989099  604010 logs.go:282] 0 containers: []
	W1213 11:57:21.989134  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:21.989159  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:21.989243  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:22.019806  604010 cri.go:89] found id: ""
	I1213 11:57:22.019848  604010 logs.go:282] 0 containers: []
	W1213 11:57:22.019861  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:22.019868  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:22.019934  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:22.044814  604010 cri.go:89] found id: ""
	I1213 11:57:22.044841  604010 logs.go:282] 0 containers: []
	W1213 11:57:22.044852  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:22.044858  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:22.044923  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:22.074682  604010 cri.go:89] found id: ""
	I1213 11:57:22.074726  604010 logs.go:282] 0 containers: []
	W1213 11:57:22.074735  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:22.074745  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:22.074757  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:22.150025  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:22.141291   11998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:22.141746   11998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:22.143484   11998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:22.144157   11998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:22.146009   11998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
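The describe-nodes failure is secondary: kubectl inside the node targets localhost:8443 (the apiserver port recorded in /var/lib/minikube/kubeconfig), and the connection is refused because, per the empty crictl listings above, no apiserver container exists to listen there. A hedged way to confirm from inside the node that nothing is bound to the port:

    # Confirm no process is listening on the kubeconfig's apiserver port.
    sudo ss -tlnp | grep ':8443' || echo 'nothing listening on 8443'
    curl -sk https://localhost:8443/healthz || echo 'apiserver unreachable'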
	I1213 11:57:22.150049  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:22.150062  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:22.178881  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:22.178917  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:22.216709  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:22.216740  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:22.281457  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:22.281489  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
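Between log-gathering cycles minikube polls for a running apiserver process; the timestamps (11:57:21, :24, :27, ...) show a roughly three-second cadence. Reduced to a shell sketch (the real loop is Go code inside minikube, and the cadence here is inferred from the timestamps, not taken from the source):

    # Poll for the apiserver process, re-gathering diagnostics between attempts.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3
      # ... re-run the crictl enumeration and log gathering, as above ...
    done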
	I1213 11:57:24.798312  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:24.808695  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:24.808764  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:24.835809  604010 cri.go:89] found id: ""
	I1213 11:57:24.835839  604010 logs.go:282] 0 containers: []
	W1213 11:57:24.835848  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:24.835855  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:24.835913  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:24.864535  604010 cri.go:89] found id: ""
	I1213 11:57:24.864560  604010 logs.go:282] 0 containers: []
	W1213 11:57:24.864568  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:24.864574  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:24.864630  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:24.894267  604010 cri.go:89] found id: ""
	I1213 11:57:24.894290  604010 logs.go:282] 0 containers: []
	W1213 11:57:24.894299  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:24.894305  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:24.894364  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:24.923204  604010 cri.go:89] found id: ""
	I1213 11:57:24.923237  604010 logs.go:282] 0 containers: []
	W1213 11:57:24.923248  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:24.923254  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:24.923313  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:24.957663  604010 cri.go:89] found id: ""
	I1213 11:57:24.957689  604010 logs.go:282] 0 containers: []
	W1213 11:57:24.957698  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:24.957705  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:24.957786  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:24.982499  604010 cri.go:89] found id: ""
	I1213 11:57:24.982524  604010 logs.go:282] 0 containers: []
	W1213 11:57:24.982533  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:24.982539  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:24.982596  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:25.013305  604010 cri.go:89] found id: ""
	I1213 11:57:25.013332  604010 logs.go:282] 0 containers: []
	W1213 11:57:25.013342  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:25.013348  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:25.013426  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:25.042403  604010 cri.go:89] found id: ""
	I1213 11:57:25.042429  604010 logs.go:282] 0 containers: []
	W1213 11:57:25.042440  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:25.042450  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:25.042462  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:25.110074  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:25.100728   12114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:25.101372   12114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:25.103156   12114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:25.103840   12114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:25.106138   12114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:57:25.110097  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:25.110109  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:25.136135  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:25.136175  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:25.187750  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:25.187781  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:25.269417  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:25.269496  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:27.795410  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:27.806308  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:27.806393  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:27.833178  604010 cri.go:89] found id: ""
	I1213 11:57:27.833204  604010 logs.go:282] 0 containers: []
	W1213 11:57:27.833213  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:27.833220  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:27.833280  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:27.864759  604010 cri.go:89] found id: ""
	I1213 11:57:27.864790  604010 logs.go:282] 0 containers: []
	W1213 11:57:27.864800  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:27.864807  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:27.864870  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:27.894576  604010 cri.go:89] found id: ""
	I1213 11:57:27.894643  604010 logs.go:282] 0 containers: []
	W1213 11:57:27.894668  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:27.894722  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:27.894809  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:27.919695  604010 cri.go:89] found id: ""
	I1213 11:57:27.919720  604010 logs.go:282] 0 containers: []
	W1213 11:57:27.919728  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:27.919735  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:27.919809  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:27.944128  604010 cri.go:89] found id: ""
	I1213 11:57:27.944152  604010 logs.go:282] 0 containers: []
	W1213 11:57:27.944161  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:27.944168  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:27.944247  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:27.968369  604010 cri.go:89] found id: ""
	I1213 11:57:27.968393  604010 logs.go:282] 0 containers: []
	W1213 11:57:27.968402  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:27.968409  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:27.968507  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:27.997345  604010 cri.go:89] found id: ""
	I1213 11:57:27.997372  604010 logs.go:282] 0 containers: []
	W1213 11:57:27.997381  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:27.997388  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:27.997451  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:28.029787  604010 cri.go:89] found id: ""
	I1213 11:57:28.029815  604010 logs.go:282] 0 containers: []
	W1213 11:57:28.029825  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:28.029837  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:28.029851  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:28.059897  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:28.059930  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:28.116398  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:28.116433  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:28.133239  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:28.133269  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:28.257725  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:28.249038   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:28.249625   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:28.251202   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:28.251730   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:28.253377   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:57:28.257746  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:28.257758  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:30.784544  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:30.795049  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:30.795122  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:30.819394  604010 cri.go:89] found id: ""
	I1213 11:57:30.819419  604010 logs.go:282] 0 containers: []
	W1213 11:57:30.819427  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:30.819434  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:30.819491  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:30.843159  604010 cri.go:89] found id: ""
	I1213 11:57:30.843184  604010 logs.go:282] 0 containers: []
	W1213 11:57:30.843193  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:30.843199  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:30.843254  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:30.869845  604010 cri.go:89] found id: ""
	I1213 11:57:30.869867  604010 logs.go:282] 0 containers: []
	W1213 11:57:30.869876  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:30.869885  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:30.869941  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:30.896812  604010 cri.go:89] found id: ""
	I1213 11:57:30.896836  604010 logs.go:282] 0 containers: []
	W1213 11:57:30.896845  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:30.896853  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:30.896913  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:30.921770  604010 cri.go:89] found id: ""
	I1213 11:57:30.921794  604010 logs.go:282] 0 containers: []
	W1213 11:57:30.921804  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:30.921810  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:30.921867  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:30.948842  604010 cri.go:89] found id: ""
	I1213 11:57:30.948869  604010 logs.go:282] 0 containers: []
	W1213 11:57:30.948878  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:30.948885  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:30.948941  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:30.975761  604010 cri.go:89] found id: ""
	I1213 11:57:30.975785  604010 logs.go:282] 0 containers: []
	W1213 11:57:30.975794  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:30.975800  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:30.975861  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:31.009297  604010 cri.go:89] found id: ""
	I1213 11:57:31.009324  604010 logs.go:282] 0 containers: []
	W1213 11:57:31.009333  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:31.009344  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:31.009357  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:31.026148  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:31.026228  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:31.092501  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:31.083099   12350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:31.083809   12350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:31.085589   12350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:31.086335   12350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:31.087969   12350 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:57:31.092527  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:31.092540  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:31.119062  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:31.119100  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:31.148109  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:31.148140  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:33.733415  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:33.744879  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:33.744947  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:33.769975  604010 cri.go:89] found id: ""
	I1213 11:57:33.770002  604010 logs.go:282] 0 containers: []
	W1213 11:57:33.770012  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:33.770019  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:33.770118  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:33.795564  604010 cri.go:89] found id: ""
	I1213 11:57:33.795587  604010 logs.go:282] 0 containers: []
	W1213 11:57:33.795595  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:33.795602  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:33.795658  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:33.820165  604010 cri.go:89] found id: ""
	I1213 11:57:33.820189  604010 logs.go:282] 0 containers: []
	W1213 11:57:33.820197  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:33.820205  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:33.820266  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:33.850474  604010 cri.go:89] found id: ""
	I1213 11:57:33.850496  604010 logs.go:282] 0 containers: []
	W1213 11:57:33.850504  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:33.850511  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:33.850571  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:33.875577  604010 cri.go:89] found id: ""
	I1213 11:57:33.875599  604010 logs.go:282] 0 containers: []
	W1213 11:57:33.875613  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:33.875620  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:33.875676  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:33.899672  604010 cri.go:89] found id: ""
	I1213 11:57:33.899696  604010 logs.go:282] 0 containers: []
	W1213 11:57:33.899704  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:33.899711  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:33.899771  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:33.924330  604010 cri.go:89] found id: ""
	I1213 11:57:33.924353  604010 logs.go:282] 0 containers: []
	W1213 11:57:33.924363  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:33.924369  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:33.924426  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:33.948447  604010 cri.go:89] found id: ""
	I1213 11:57:33.948470  604010 logs.go:282] 0 containers: []
	W1213 11:57:33.948479  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:33.948489  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:33.948500  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:34.007962  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:34.008002  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:34.025302  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:34.025333  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:34.092523  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:34.083642   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:34.084406   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:34.086056   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:34.086792   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:34.088528   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:57:34.092559  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:34.092571  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:34.118672  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:34.118743  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
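The "container status" gather builds in a double fallback: the backticks substitute crictl's path when `which` finds it (or the bare name when it does not), and if crictl itself errors the pipeline falls back to docker. Expanded as a sketch:

    # Two-stage fallback used by the container-status gather.
    cmd=$(which crictl || echo crictl)      # absolute path, or bare name as a last guess
    sudo "$cmd" ps -a || sudo docker ps -a  # docker only if crictl fails outright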
	I1213 11:57:36.651173  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:36.662055  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:36.662135  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:36.690956  604010 cri.go:89] found id: ""
	I1213 11:57:36.690981  604010 logs.go:282] 0 containers: []
	W1213 11:57:36.690990  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:36.690997  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:36.691067  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:36.716966  604010 cri.go:89] found id: ""
	I1213 11:57:36.716989  604010 logs.go:282] 0 containers: []
	W1213 11:57:36.716998  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:36.717004  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:36.717063  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:36.741609  604010 cri.go:89] found id: ""
	I1213 11:57:36.741651  604010 logs.go:282] 0 containers: []
	W1213 11:57:36.741661  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:36.741667  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:36.741736  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:36.766862  604010 cri.go:89] found id: ""
	I1213 11:57:36.766898  604010 logs.go:282] 0 containers: []
	W1213 11:57:36.766907  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:36.766914  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:36.766978  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:36.792075  604010 cri.go:89] found id: ""
	I1213 11:57:36.792103  604010 logs.go:282] 0 containers: []
	W1213 11:57:36.792112  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:36.792119  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:36.792198  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:36.817506  604010 cri.go:89] found id: ""
	I1213 11:57:36.817540  604010 logs.go:282] 0 containers: []
	W1213 11:57:36.817549  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:36.817558  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:36.817624  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:36.842603  604010 cri.go:89] found id: ""
	I1213 11:57:36.842627  604010 logs.go:282] 0 containers: []
	W1213 11:57:36.842635  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:36.842641  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:36.842721  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:36.868253  604010 cri.go:89] found id: ""
	I1213 11:57:36.868276  604010 logs.go:282] 0 containers: []
	W1213 11:57:36.868286  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:36.868295  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:36.868307  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:36.925033  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:36.925067  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:36.941121  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:36.941202  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:37.010945  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:36.998940   12577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:37.000295   12577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:37.000838   12577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:37.002747   12577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:37.003200   12577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:57:37.010971  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:37.010986  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:37.039679  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:37.039717  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:39.569521  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:39.580209  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:39.580283  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:39.607577  604010 cri.go:89] found id: ""
	I1213 11:57:39.607609  604010 logs.go:282] 0 containers: []
	W1213 11:57:39.607618  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:39.607625  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:39.607684  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:39.632984  604010 cri.go:89] found id: ""
	I1213 11:57:39.633007  604010 logs.go:282] 0 containers: []
	W1213 11:57:39.633016  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:39.633022  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:39.633079  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:39.660977  604010 cri.go:89] found id: ""
	I1213 11:57:39.661006  604010 logs.go:282] 0 containers: []
	W1213 11:57:39.661016  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:39.661022  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:39.661083  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:39.685387  604010 cri.go:89] found id: ""
	I1213 11:57:39.685414  604010 logs.go:282] 0 containers: []
	W1213 11:57:39.685423  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:39.685430  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:39.685488  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:39.711315  604010 cri.go:89] found id: ""
	I1213 11:57:39.711354  604010 logs.go:282] 0 containers: []
	W1213 11:57:39.711364  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:39.711370  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:39.711434  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:39.736665  604010 cri.go:89] found id: ""
	I1213 11:57:39.736691  604010 logs.go:282] 0 containers: []
	W1213 11:57:39.736700  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:39.736707  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:39.736765  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:39.761215  604010 cri.go:89] found id: ""
	I1213 11:57:39.761240  604010 logs.go:282] 0 containers: []
	W1213 11:57:39.761250  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:39.761257  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:39.761317  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:39.785612  604010 cri.go:89] found id: ""
	I1213 11:57:39.785635  604010 logs.go:282] 0 containers: []
	W1213 11:57:39.785667  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:39.785677  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:39.785688  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:39.818169  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:39.818198  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:39.876172  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:39.876207  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:39.893614  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:39.893697  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:39.961561  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:39.953062   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:39.953798   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:39.955462   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:39.955793   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:39.957262   12706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:57:39.961582  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:39.961598  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
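For offline triage of a run like this, the three journals gathered each cycle are usually enough; the kubelet journal in particular records why pod sandboxes failed to start. The same evidence can be pulled by hand with the commands the log already uses (--no-pager is an addition for non-interactive capture):

    sudo journalctl -u kubelet -n 400 --no-pager
    sudo journalctl -u containerd -n 400 --no-pager
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400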
	I1213 11:57:42.487536  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:42.498423  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:42.498495  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:42.526754  604010 cri.go:89] found id: ""
	I1213 11:57:42.526784  604010 logs.go:282] 0 containers: []
	W1213 11:57:42.526793  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:42.526800  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:42.526866  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:42.557909  604010 cri.go:89] found id: ""
	I1213 11:57:42.557938  604010 logs.go:282] 0 containers: []
	W1213 11:57:42.557948  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:42.557955  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:42.558012  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:42.583283  604010 cri.go:89] found id: ""
	I1213 11:57:42.583311  604010 logs.go:282] 0 containers: []
	W1213 11:57:42.583319  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:42.583325  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:42.583417  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:42.612201  604010 cri.go:89] found id: ""
	I1213 11:57:42.612228  604010 logs.go:282] 0 containers: []
	W1213 11:57:42.612238  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:42.612244  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:42.612304  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:42.636897  604010 cri.go:89] found id: ""
	I1213 11:57:42.636926  604010 logs.go:282] 0 containers: []
	W1213 11:57:42.636935  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:42.636942  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:42.637003  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:42.662077  604010 cri.go:89] found id: ""
	I1213 11:57:42.662101  604010 logs.go:282] 0 containers: []
	W1213 11:57:42.662109  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:42.662116  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:42.662181  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:42.689090  604010 cri.go:89] found id: ""
	I1213 11:57:42.689117  604010 logs.go:282] 0 containers: []
	W1213 11:57:42.689126  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:42.689132  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:42.689194  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:42.714186  604010 cri.go:89] found id: ""
	I1213 11:57:42.714220  604010 logs.go:282] 0 containers: []
	W1213 11:57:42.714229  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:42.714239  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:42.714253  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:42.730012  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:42.730043  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:42.793528  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:42.784227   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:42.785106   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:42.787066   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:42.787860   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:42.789513   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 11:57:42.793550  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:42.793562  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:42.820504  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:42.820540  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:42.850739  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:42.850772  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:45.416253  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:45.428104  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:45.428174  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:45.486919  604010 cri.go:89] found id: ""
	I1213 11:57:45.486943  604010 logs.go:282] 0 containers: []
	W1213 11:57:45.486952  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:45.486959  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:45.487018  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:45.518438  604010 cri.go:89] found id: ""
	I1213 11:57:45.518466  604010 logs.go:282] 0 containers: []
	W1213 11:57:45.518475  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:45.518482  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:45.518539  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:45.543147  604010 cri.go:89] found id: ""
	I1213 11:57:45.543174  604010 logs.go:282] 0 containers: []
	W1213 11:57:45.543183  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:45.543189  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:45.543247  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:45.568184  604010 cri.go:89] found id: ""
	I1213 11:57:45.568210  604010 logs.go:282] 0 containers: []
	W1213 11:57:45.568219  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:45.568226  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:45.568283  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:45.597036  604010 cri.go:89] found id: ""
	I1213 11:57:45.597062  604010 logs.go:282] 0 containers: []
	W1213 11:57:45.597072  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:45.597078  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:45.597140  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:45.625538  604010 cri.go:89] found id: ""
	I1213 11:57:45.625563  604010 logs.go:282] 0 containers: []
	W1213 11:57:45.625572  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:45.625579  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:45.625664  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:45.650305  604010 cri.go:89] found id: ""
	I1213 11:57:45.650340  604010 logs.go:282] 0 containers: []
	W1213 11:57:45.650350  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:45.650356  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:45.650415  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:45.674642  604010 cri.go:89] found id: ""
	I1213 11:57:45.674668  604010 logs.go:282] 0 containers: []
	W1213 11:57:45.674677  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:45.674723  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:45.674736  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:45.737984  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:45.729194   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:45.729808   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:45.731387   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:45.731876   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:45.733423   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:45.729194   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:45.729808   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:45.731387   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:45.731876   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:45.733423   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:45.738014  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:45.738030  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:45.764253  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:45.764293  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:45.794872  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:45.794900  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:45.852148  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:45.852181  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:48.369680  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:48.381452  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:48.381527  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:48.406963  604010 cri.go:89] found id: ""
	I1213 11:57:48.406989  604010 logs.go:282] 0 containers: []
	W1213 11:57:48.406998  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:48.407004  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:48.407069  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:48.453016  604010 cri.go:89] found id: ""
	I1213 11:57:48.453043  604010 logs.go:282] 0 containers: []
	W1213 11:57:48.453052  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:48.453060  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:48.453120  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:48.512775  604010 cri.go:89] found id: ""
	I1213 11:57:48.512806  604010 logs.go:282] 0 containers: []
	W1213 11:57:48.512815  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:48.512821  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:48.512879  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:48.538032  604010 cri.go:89] found id: ""
	I1213 11:57:48.538055  604010 logs.go:282] 0 containers: []
	W1213 11:57:48.538064  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:48.538070  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:48.538129  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:48.562781  604010 cri.go:89] found id: ""
	I1213 11:57:48.562815  604010 logs.go:282] 0 containers: []
	W1213 11:57:48.562831  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:48.562841  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:48.562899  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:48.592224  604010 cri.go:89] found id: ""
	I1213 11:57:48.592249  604010 logs.go:282] 0 containers: []
	W1213 11:57:48.592258  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:48.592265  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:48.592324  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:48.616499  604010 cri.go:89] found id: ""
	I1213 11:57:48.616524  604010 logs.go:282] 0 containers: []
	W1213 11:57:48.616533  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:48.616540  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:48.616604  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:48.641140  604010 cri.go:89] found id: ""
	I1213 11:57:48.641164  604010 logs.go:282] 0 containers: []
	W1213 11:57:48.641173  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:48.641183  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:48.641193  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:48.667031  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:48.667069  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:48.696402  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:48.696431  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:48.752046  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:48.752080  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:48.768352  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:48.768382  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:48.835752  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:48.828038   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:48.828514   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:48.830127   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:48.830542   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:48.831979   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:48.828038   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:48.828514   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:48.830127   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:48.830542   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:48.831979   13048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:51.337160  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:51.349596  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:51.349697  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:51.384310  604010 cri.go:89] found id: ""
	I1213 11:57:51.384341  604010 logs.go:282] 0 containers: []
	W1213 11:57:51.384350  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:51.384358  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:51.384415  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:51.409502  604010 cri.go:89] found id: ""
	I1213 11:57:51.409523  604010 logs.go:282] 0 containers: []
	W1213 11:57:51.409532  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:51.409539  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:51.409595  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:51.444866  604010 cri.go:89] found id: ""
	I1213 11:57:51.444887  604010 logs.go:282] 0 containers: []
	W1213 11:57:51.444896  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:51.444901  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:51.444957  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:51.498878  604010 cri.go:89] found id: ""
	I1213 11:57:51.498900  604010 logs.go:282] 0 containers: []
	W1213 11:57:51.498908  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:51.498915  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:51.498970  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:51.532054  604010 cri.go:89] found id: ""
	I1213 11:57:51.532082  604010 logs.go:282] 0 containers: []
	W1213 11:57:51.532091  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:51.532098  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:51.532159  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:51.561798  604010 cri.go:89] found id: ""
	I1213 11:57:51.561833  604010 logs.go:282] 0 containers: []
	W1213 11:57:51.561842  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:51.561849  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:51.561906  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:51.586723  604010 cri.go:89] found id: ""
	I1213 11:57:51.586798  604010 logs.go:282] 0 containers: []
	W1213 11:57:51.586820  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:51.586843  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:51.586951  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:51.612513  604010 cri.go:89] found id: ""
	I1213 11:57:51.612538  604010 logs.go:282] 0 containers: []
	W1213 11:57:51.612547  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:51.612557  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:51.612569  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:51.628622  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:51.628650  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:51.699783  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:51.691237   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:51.691797   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:51.693193   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:51.693944   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:51.695717   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:51.691237   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:51.691797   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:51.693193   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:51.693944   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:51.695717   13147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:51.699815  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:51.699832  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:51.725055  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:51.725092  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:51.758574  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:51.758604  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:54.315140  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:54.325600  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:57:54.325693  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:57:54.352056  604010 cri.go:89] found id: ""
	I1213 11:57:54.352081  604010 logs.go:282] 0 containers: []
	W1213 11:57:54.352089  604010 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:57:54.352096  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 11:57:54.352157  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:57:54.375586  604010 cri.go:89] found id: ""
	I1213 11:57:54.375611  604010 logs.go:282] 0 containers: []
	W1213 11:57:54.375620  604010 logs.go:284] No container was found matching "etcd"
	I1213 11:57:54.375626  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 11:57:54.375683  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:57:54.399138  604010 cri.go:89] found id: ""
	I1213 11:57:54.399163  604010 logs.go:282] 0 containers: []
	W1213 11:57:54.399172  604010 logs.go:284] No container was found matching "coredns"
	I1213 11:57:54.399178  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:57:54.399234  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:57:54.439999  604010 cri.go:89] found id: ""
	I1213 11:57:54.440025  604010 logs.go:282] 0 containers: []
	W1213 11:57:54.440033  604010 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:57:54.440039  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:57:54.440096  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:57:54.505093  604010 cri.go:89] found id: ""
	I1213 11:57:54.505124  604010 logs.go:282] 0 containers: []
	W1213 11:57:54.505133  604010 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:57:54.505140  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:57:54.505198  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:57:54.529921  604010 cri.go:89] found id: ""
	I1213 11:57:54.529947  604010 logs.go:282] 0 containers: []
	W1213 11:57:54.529956  604010 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:57:54.529966  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 11:57:54.530029  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:57:54.556363  604010 cri.go:89] found id: ""
	I1213 11:57:54.556390  604010 logs.go:282] 0 containers: []
	W1213 11:57:54.556399  604010 logs.go:284] No container was found matching "kindnet"
	I1213 11:57:54.556406  604010 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 11:57:54.556483  604010 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 11:57:54.581531  604010 cri.go:89] found id: ""
	I1213 11:57:54.581556  604010 logs.go:282] 0 containers: []
	W1213 11:57:54.581565  604010 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 11:57:54.581574  604010 logs.go:123] Gathering logs for kubelet ...
	I1213 11:57:54.581603  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:57:54.637009  604010 logs.go:123] Gathering logs for dmesg ...
	I1213 11:57:54.637043  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:57:54.652919  604010 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:57:54.652949  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:57:54.717113  604010 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:57:54.708684   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:54.709580   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:54.711317   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:54.711640   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:54.713137   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:57:54.708684   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:54.709580   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:54.711317   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:54.711640   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:57:54.713137   13260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:57:54.717134  604010 logs.go:123] Gathering logs for containerd ...
	I1213 11:57:54.717148  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 11:57:54.743116  604010 logs.go:123] Gathering logs for container status ...
	I1213 11:57:54.743151  604010 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:57:57.272010  604010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:57:57.285875  604010 out.go:203] 
	W1213 11:57:57.288788  604010 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1213 11:57:57.288838  604010 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1213 11:57:57.288853  604010 out.go:285] * Related issues:
	W1213 11:57:57.288872  604010 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1213 11:57:57.288889  604010 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1213 11:57:57.291728  604010 out.go:203] 
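Note: the K8S_APISERVER_MISSING exit above is the end state of the probe loop that fills this log: minikube alternates `sudo pgrep -xnf kube-apiserver.*minikube.*` with `sudo crictl ps -a --quiet --name=<component>` and never finds a single control-plane container. The probe is easy to replay by hand; a sketch, run inside the node:

    # replicate minikube's apiserver probe: process check, then CRI container check
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no apiserver process"
    ids=$(sudo crictl ps -a --quiet --name=kube-apiserver)
    [ -n "$ids" ] && echo "$ids" || echo "no apiserver container"

Both coming back empty, as here, points past the apiserver to the kubelet, which is what launches the static control-plane pods; the kubelet section of the dump below confirms it never got that far.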
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.355817742Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.355832504Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.355869739Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.355890810Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.355900722Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.355913464Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.355922515Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.355936029Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.355951643Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.355983734Z" level=info msg="Connect containerd service"
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.356248656Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.356827911Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.372443055Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.372505251Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.372539417Z" level=info msg="Start subscribing containerd event"
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.372587426Z" level=info msg="Start recovering state"
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.413846470Z" level=info msg="Start event monitor"
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.413904095Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.413916928Z" level=info msg="Start streaming server"
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.413926332Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.413934643Z" level=info msg="runtime interface starting up..."
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.413940961Z" level=info msg="starting plugins..."
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.413972059Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 11:51:54 newest-cni-796924 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 11:51:54 newest-cni-796924 containerd[555]: time="2025-12-13T11:51:54.415701136Z" level=info msg="containerd successfully booted in 0.081179s"
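Note: the only error containerd itself reports is the CNI one at 11:51:54, "no network config found in /etc/cni/net.d". That message is normal on a node where no CNI has been installed yet and is unlikely to be the root cause here, but it is cheap to rule out; a sketch, assuming shell access to the node:

    # an empty directory explains the containerd warning above
    ls -l /etc/cni/net.d/

The decisive failure is in the kubelet section further down, which never stays up long enough to set up networking at all.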
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:58:10.588262   13927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:58:10.589614   13927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:58:10.590251   13927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:58:10.591398   13927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 11:58:10.592090   13927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +12.907693] overlayfs: idmapped layers are currently not supported
	[Dec13 09:25] overlayfs: idmapped layers are currently not supported
	[ +26.192425] overlayfs: idmapped layers are currently not supported
	[Dec13 09:26] overlayfs: idmapped layers are currently not supported
	[ +25.729788] overlayfs: idmapped layers are currently not supported
	[Dec13 09:27] overlayfs: idmapped layers are currently not supported
	[Dec13 09:28] overlayfs: idmapped layers are currently not supported
	[Dec13 09:31] overlayfs: idmapped layers are currently not supported
	[Dec13 09:32] overlayfs: idmapped layers are currently not supported
	[ +17.979093] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec13 09:43] overlayfs: idmapped layers are currently not supported
	[Dec13 09:45] overlayfs: idmapped layers are currently not supported
	[ +25.885102] overlayfs: idmapped layers are currently not supported
	[Dec13 09:46] overlayfs: idmapped layers are currently not supported
	[ +22.078149] overlayfs: idmapped layers are currently not supported
	[Dec13 09:47] overlayfs: idmapped layers are currently not supported
	[Dec13 09:48] overlayfs: idmapped layers are currently not supported
	[Dec13 09:49] overlayfs: idmapped layers are currently not supported
	[Dec13 09:51] overlayfs: idmapped layers are currently not supported
	[ +17.043564] overlayfs: idmapped layers are currently not supported
	[Dec13 09:52] overlayfs: idmapped layers are currently not supported
	[Dec13 09:53] overlayfs: idmapped layers are currently not supported
	[Dec13 10:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:19] hrtimer: interrupt took 21247146 ns
	
	
	==> kernel <==
	 11:58:10 up  4:40,  0 user,  load average: 0.86, 0.90, 1.24
	Linux newest-cni-796924 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 11:58:07 newest-cni-796924 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:58:08 newest-cni-796924 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
	Dec 13 11:58:08 newest-cni-796924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:58:08 newest-cni-796924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:58:08 newest-cni-796924 kubelet[13787]: E1213 11:58:08.300569   13787 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:58:08 newest-cni-796924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:58:08 newest-cni-796924 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:58:08 newest-cni-796924 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
	Dec 13 11:58:08 newest-cni-796924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:58:08 newest-cni-796924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:58:08 newest-cni-796924 kubelet[13815]: E1213 11:58:08.993543   13815 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:58:09 newest-cni-796924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:58:09 newest-cni-796924 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:58:09 newest-cni-796924 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
	Dec 13 11:58:09 newest-cni-796924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:58:09 newest-cni-796924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:58:09 newest-cni-796924 kubelet[13829]: E1213 11:58:09.778673   13829 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:58:09 newest-cni-796924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:58:09 newest-cni-796924 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:58:10 newest-cni-796924 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
	Dec 13 11:58:10 newest-cni-796924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:58:10 newest-cni-796924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:58:10 newest-cni-796924 kubelet[13908]: E1213 11:58:10.506835   13908 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:58:10 newest-cni-796924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:58:10 newest-cni-796924 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
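Note: the kubelet section of the dump above is the actual root cause of this failure. Every restart (counters 5 through 8 in the excerpt) dies during config validation with "kubelet is configured to not run on a host using cgroup v1", so the static control-plane pods, the apiserver included, are never created, which in turn explains every "connection refused" earlier in the log. Which cgroup hierarchy a host is on can be checked with standard tooling; a sketch:

    # "cgroup2fs" means cgroup v2 (unified); "tmpfs" means the legacy v1 hierarchy
    stat -fc %T /sys/fs/cgroup/

The kernel section shows the host runs an Ubuntu 20.04 5.15 kernel, which defaults to the v1 hierarchy; booting the host with systemd.unified_cgroup_hierarchy=1 is the usual way onto cgroup v2, though whether that is appropriate for this CI host is a question for the job owners rather than this report.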
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-796924 -n newest-cni-796924
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-796924 -n newest-cni-796924: exit status 2 (335.998874ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-796924" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (9.68s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (284.49s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[... the WARNING line above repeats verbatim 50 more times as the helper keeps polling https://192.168.85.2:8443 and getting "connection refused" ...]
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
I1213 12:03:54.776868  308915 config.go:182] Loaded profile config "custom-flannel-270721": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[last message repeated 11 more times]
E1213 12:04:07.247032  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/auto-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:04:07.253391  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/auto-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:04:07.264685  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/auto-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:04:07.286053  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/auto-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:04:07.327455  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/auto-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:04:07.409150  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/auto-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:04:07.570668  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/auto-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:04:07.893028  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/auto-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:04:08.535429  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/auto-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[last message repeated 1 more time]
E1213 12:04:09.817079  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/auto-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[last message repeated 1 more time]
E1213 12:04:12.378653  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/auto-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[last message repeated 4 more times]
E1213 12:04:17.500440  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/auto-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[last message repeated 10 more times]
E1213 12:04:27.742385  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/auto-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[last message repeated 18 more times]
E1213 12:04:47.365307  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:04:48.224419  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/auto-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[last message repeated 19 more times]
E1213 12:05:08.208945  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/old-k8s-version-624185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[last message repeated 3 more times]
E1213 12:05:12.241296  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:05:12.404112  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/default-k8s-diff-port-191845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
I1213 12:05:24.699292  308915 config.go:182] Loaded profile config "kindnet-270721": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:05:32.257396  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/flannel-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:05:32.263796  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/flannel-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:05:32.275274  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/flannel-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:05:32.297005  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/flannel-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:05:32.338431  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/flannel-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:05:32.420118  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/flannel-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:05:32.581645  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/flannel-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:05:32.903955  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/flannel-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:05:33.546045  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/flannel-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:05:34.827892  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/flannel-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:05:37.389921  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/flannel-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:05:42.511183  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/flannel-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:05:52.753018  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/flannel-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:06:13.234358  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/flannel-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:06:31.167370  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-333352 -n no-preload-333352
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-333352 -n no-preload-333352: exit status 2 (323.773087ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "no-preload-333352" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-333352 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context no-preload-333352 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.805µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-333352 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
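The image assertion above came back empty because the kubectl describe ran against an already-exhausted context deadline (it aborted after 1.805µs), not because the deployment was inspected and found wrong. With a reachable apiserver, the image the test matches against could be read directly; a sketch, assuming the kubeconfig context for this profile:

    kubectl --context no-preload-333352 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'

For the check at start_stop_delete_test.go:295 to pass, that output would need to contain registry.k8s.io/echoserver:1.4.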
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-333352
helpers_test.go:244: (dbg) docker inspect no-preload-333352:

-- stdout --
	[
	    {
	        "Id": "ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db",
	        "Created": "2025-12-13T11:36:44.52795509Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 597136,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T11:46:48.212033137Z",
	            "FinishedAt": "2025-12-13T11:46:46.812235669Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db/hostname",
	        "HostsPath": "/var/lib/docker/containers/ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db/hosts",
	        "LogPath": "/var/lib/docker/containers/ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db/ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db-json.log",
	        "Name": "/no-preload-333352",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-333352:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-333352",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ca124efb8aebfb163e007be65b6539527a396b3903d870988080111233d4f8db",
	                "LowerDir": "/var/lib/docker/overlay2/d63500bace015e4f0992e53022eb307c815344bef0eac4b57fc68ef6b6be3891-init/diff:/var/lib/docker/overlay2/5d0ca86320f2c06765995d644a73af30cd6ddb36f5c1a2a6b1ebb24af53cdabd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d63500bace015e4f0992e53022eb307c815344bef0eac4b57fc68ef6b6be3891/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d63500bace015e4f0992e53022eb307c815344bef0eac4b57fc68ef6b6be3891/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d63500bace015e4f0992e53022eb307c815344bef0eac4b57fc68ef6b6be3891/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-333352",
	                "Source": "/var/lib/docker/volumes/no-preload-333352/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-333352",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-333352",
	                "name.minikube.sigs.k8s.io": "no-preload-333352",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "368f444acead1313629634c955e38e7aa3bb1a58261aa4f155fef5ab3cc6d2d9",
	            "SandboxKey": "/var/run/docker/netns/368f444acead",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-333352": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:92:40:ad:16:f6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ee20fc50f482b31273047147a2f419c36704bb98933537d0ac5901a560402043",
	                    "EndpointID": "c1aa6ce135257fa89e5e51421f21414b58021c38959e96fd72756c63a958cfdd",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-333352",
	                        "ca124efb8aeb"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
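The inspect dump above is large, but the two facts the post-mortem needs are that 8443/tcp is published on 127.0.0.1:33438 and that the container holds the static address 192.168.85.2 on the no-preload-333352 network. Single fields like these can be pulled out with docker's Go-template filter instead of scanning the JSON; a minimal sketch:

    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-333352

Against the state captured above, this would print 33438.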
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-333352 -n no-preload-333352
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-333352 -n no-preload-333352: exit status 2 (314.144169ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
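Read together, the two status probes pin down the failure mode: the node container is Running ({{.Host}}) while the apiserver inside it is Stopped ({{.APIServer}}), which is exactly why every pod-list poll above was refused at the TCP level. Both fields can be requested in one call; a sketch using the same binary and the template fields already exercised by the test:

    out/minikube-linux-arm64 status -p no-preload-333352 --format='host:{{.Host}} apiserver:{{.APIServer}}'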
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-333352 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                     ARGS                                                                     │    PROFILE     │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-270721 sudo systemctl status kubelet --all --full --no-pager                                                                      │ kindnet-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:05 UTC │ 13 Dec 25 12:05 UTC │
	│ ssh     │ -p kindnet-270721 sudo systemctl cat kubelet --no-pager                                                                                      │ kindnet-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:05 UTC │ 13 Dec 25 12:05 UTC │
	│ ssh     │ -p kindnet-270721 sudo journalctl -xeu kubelet --all --full --no-pager                                                                       │ kindnet-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:05 UTC │ 13 Dec 25 12:05 UTC │
	│ ssh     │ -p kindnet-270721 sudo cat /etc/kubernetes/kubelet.conf                                                                                      │ kindnet-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:05 UTC │ 13 Dec 25 12:05 UTC │
	│ ssh     │ -p kindnet-270721 sudo cat /var/lib/kubelet/config.yaml                                                                                      │ kindnet-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:05 UTC │ 13 Dec 25 12:05 UTC │
	│ ssh     │ -p kindnet-270721 sudo systemctl status docker --all --full --no-pager                                                                       │ kindnet-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:05 UTC │                     │
	│ ssh     │ -p kindnet-270721 sudo systemctl cat docker --no-pager                                                                                       │ kindnet-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:05 UTC │ 13 Dec 25 12:05 UTC │
	│ ssh     │ -p kindnet-270721 sudo cat /etc/docker/daemon.json                                                                                           │ kindnet-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:05 UTC │                     │
	│ ssh     │ -p kindnet-270721 sudo docker system info                                                                                                    │ kindnet-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:05 UTC │                     │
	│ ssh     │ -p kindnet-270721 sudo systemctl status cri-docker --all --full --no-pager                                                                   │ kindnet-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:05 UTC │                     │
	│ ssh     │ -p kindnet-270721 sudo systemctl cat cri-docker --no-pager                                                                                   │ kindnet-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:05 UTC │ 13 Dec 25 12:05 UTC │
	│ ssh     │ -p kindnet-270721 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                              │ kindnet-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:05 UTC │                     │
	│ ssh     │ -p kindnet-270721 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                        │ kindnet-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:05 UTC │ 13 Dec 25 12:05 UTC │
	│ ssh     │ -p kindnet-270721 sudo cri-dockerd --version                                                                                                 │ kindnet-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:05 UTC │ 13 Dec 25 12:05 UTC │
	│ ssh     │ -p kindnet-270721 sudo systemctl status containerd --all --full --no-pager                                                                   │ kindnet-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:05 UTC │ 13 Dec 25 12:05 UTC │
	│ ssh     │ -p kindnet-270721 sudo systemctl cat containerd --no-pager                                                                                   │ kindnet-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:05 UTC │ 13 Dec 25 12:05 UTC │
	│ ssh     │ -p kindnet-270721 sudo cat /lib/systemd/system/containerd.service                                                                            │ kindnet-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:05 UTC │ 13 Dec 25 12:05 UTC │
	│ ssh     │ -p kindnet-270721 sudo cat /etc/containerd/config.toml                                                                                       │ kindnet-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:05 UTC │ 13 Dec 25 12:05 UTC │
	│ ssh     │ -p kindnet-270721 sudo containerd config dump                                                                                                │ kindnet-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:05 UTC │ 13 Dec 25 12:05 UTC │
	│ ssh     │ -p kindnet-270721 sudo systemctl status crio --all --full --no-pager                                                                         │ kindnet-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:05 UTC │                     │
	│ ssh     │ -p kindnet-270721 sudo systemctl cat crio --no-pager                                                                                         │ kindnet-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:05 UTC │ 13 Dec 25 12:05 UTC │
	│ ssh     │ -p kindnet-270721 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                               │ kindnet-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:05 UTC │ 13 Dec 25 12:05 UTC │
	│ ssh     │ -p kindnet-270721 sudo crio config                                                                                                           │ kindnet-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:05 UTC │ 13 Dec 25 12:05 UTC │
	│ delete  │ -p kindnet-270721                                                                                                                            │ kindnet-270721 │ jenkins │ v1.37.0 │ 13 Dec 25 12:05 UTC │ 13 Dec 25 12:05 UTC │
	│ start   │ -p bridge-270721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd │ bridge-270721  │ jenkins │ v1.37.0 │ 13 Dec 25 12:05 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 12:05:55
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 12:05:55.815037  658372 out.go:360] Setting OutFile to fd 1 ...
	I1213 12:05:55.815165  658372 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 12:05:55.815176  658372 out.go:374] Setting ErrFile to fd 2...
	I1213 12:05:55.815181  658372 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 12:05:55.815440  658372 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 12:05:55.815863  658372 out.go:368] Setting JSON to false
	I1213 12:05:55.816752  658372 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":17309,"bootTime":1765610247,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 12:05:55.816820  658372 start.go:143] virtualization:  
	I1213 12:05:55.821197  658372 out.go:179] * [bridge-270721] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 12:05:55.825875  658372 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 12:05:55.825972  658372 notify.go:221] Checking for updates...
	I1213 12:05:55.832497  658372 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 12:05:55.836024  658372 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 12:05:55.839305  658372 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 12:05:55.842347  658372 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 12:05:55.845380  658372 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 12:05:55.849111  658372 config.go:182] Loaded profile config "no-preload-333352": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 12:05:55.849213  658372 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 12:05:55.871296  658372 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 12:05:55.871435  658372 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 12:05:55.972886  658372 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 12:05:55.962868213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 12:05:55.972999  658372 docker.go:319] overlay module found
	I1213 12:05:55.979522  658372 out.go:179] * Using the docker driver based on user configuration
	I1213 12:05:55.982658  658372 start.go:309] selected driver: docker
	I1213 12:05:55.982680  658372 start.go:927] validating driver "docker" against <nil>
	I1213 12:05:55.982721  658372 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 12:05:55.983486  658372 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 12:05:56.042270  658372 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 12:05:56.033090824 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 12:05:56.042439  658372 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 12:05:56.042673  658372 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 12:05:56.045763  658372 out.go:179] * Using Docker driver with root privileges
	I1213 12:05:56.048874  658372 cni.go:84] Creating CNI manager for "bridge"
	I1213 12:05:56.048907  658372 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1213 12:05:56.049009  658372 start.go:353] cluster config:
	{Name:bridge-270721 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:bridge-270721 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 12:05:56.052367  658372 out.go:179] * Starting "bridge-270721" primary control-plane node in "bridge-270721" cluster
	I1213 12:05:56.055318  658372 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 12:05:56.058365  658372 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 12:05:56.061270  658372 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 12:05:56.061327  658372 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1213 12:05:56.061338  658372 cache.go:65] Caching tarball of preloaded images
	I1213 12:05:56.061408  658372 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 12:05:56.061429  658372 preload.go:238] Found /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 12:05:56.061447  658372 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1213 12:05:56.061561  658372 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/bridge-270721/config.json ...
	I1213 12:05:56.061585  658372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/bridge-270721/config.json: {Name:mk7ec3c47b8adef03ca30d5ed79f0053eb75cef6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:05:56.081800  658372 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 12:05:56.081825  658372 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 12:05:56.081840  658372 cache.go:243] Successfully downloaded all kic artifacts
	I1213 12:05:56.081874  658372 start.go:360] acquireMachinesLock for bridge-270721: {Name:mka3d6aa01e669ae4736ccb7d676700a3900cdfe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:05:56.081993  658372 start.go:364] duration metric: took 97.437µs to acquireMachinesLock for "bridge-270721"
	I1213 12:05:56.082025  658372 start.go:93] Provisioning new machine with config: &{Name:bridge-270721 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:bridge-270721 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 12:05:56.082105  658372 start.go:125] createHost starting for "" (driver="docker")
	I1213 12:05:56.085579  658372 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 12:05:56.085834  658372 start.go:159] libmachine.API.Create for "bridge-270721" (driver="docker")
	I1213 12:05:56.085878  658372 client.go:173] LocalClient.Create starting
	I1213 12:05:56.085947  658372 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem
	I1213 12:05:56.085989  658372 main.go:143] libmachine: Decoding PEM data...
	I1213 12:05:56.086009  658372 main.go:143] libmachine: Parsing certificate...
	I1213 12:05:56.086064  658372 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem
	I1213 12:05:56.086087  658372 main.go:143] libmachine: Decoding PEM data...
	I1213 12:05:56.086102  658372 main.go:143] libmachine: Parsing certificate...
	I1213 12:05:56.086466  658372 cli_runner.go:164] Run: docker network inspect bridge-270721 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 12:05:56.103355  658372 cli_runner.go:211] docker network inspect bridge-270721 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 12:05:56.103439  658372 network_create.go:284] running [docker network inspect bridge-270721] to gather additional debugging logs...
	I1213 12:05:56.103459  658372 cli_runner.go:164] Run: docker network inspect bridge-270721
	W1213 12:05:56.119383  658372 cli_runner.go:211] docker network inspect bridge-270721 returned with exit code 1
	I1213 12:05:56.119414  658372 network_create.go:287] error running [docker network inspect bridge-270721]: docker network inspect bridge-270721: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network bridge-270721 not found
	I1213 12:05:56.119428  658372 network_create.go:289] output of [docker network inspect bridge-270721]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network bridge-270721 not found
	
	** /stderr **
	I1213 12:05:56.119527  658372 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 12:05:56.136945  658372 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-381e4ce3c9ab IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:2d:23:57:0e:cc} reservation:<nil>}
	I1213 12:05:56.137382  658372 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bd1082d121b0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:7a:42:ce:41:ea:ae} reservation:<nil>}
	I1213 12:05:56.137842  658372 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ebeb7162e340 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:cf:aa:41:ac:19} reservation:<nil>}
	I1213 12:05:56.138392  658372 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019fae30}
	I1213 12:05:56.138417  658372 network_create.go:124] attempt to create docker network bridge-270721 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1213 12:05:56.138492  658372 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-270721 bridge-270721
	I1213 12:05:56.201245  658372 network_create.go:108] docker network bridge-270721 192.168.76.0/24 created
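
	For reference, the subnet scan and network creation logged above can be reproduced by hand with the Docker CLI. A minimal sketch using the exact name, subnet, and options from this run (the "-o --ip-masq -o --icc" option keys are passed verbatim by minikube):

	    # Create the cluster network: a /24 bridge with MTU 1500, labeled so
	    # minikube can find and clean it up later.
	    docker network create \
	      --driver=bridge \
	      --subnet=192.168.76.0/24 \
	      --gateway=192.168.76.1 \
	      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	      --label=created_by.minikube.sigs.k8s.io=true \
	      --label=name.minikube.sigs.k8s.io=bridge-270721 \
	      bridge-270721

	    # Confirm the subnet the static node IP 192.168.76.2 is carved from.
	    docker network inspect bridge-270721 --format '{{json .IPAM.Config}}'
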
	I1213 12:05:56.201283  658372 kic.go:121] calculated static IP "192.168.76.2" for the "bridge-270721" container
	I1213 12:05:56.201379  658372 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 12:05:56.217826  658372 cli_runner.go:164] Run: docker volume create bridge-270721 --label name.minikube.sigs.k8s.io=bridge-270721 --label created_by.minikube.sigs.k8s.io=true
	I1213 12:05:56.236015  658372 oci.go:103] Successfully created a docker volume bridge-270721
	I1213 12:05:56.236115  658372 cli_runner.go:164] Run: docker run --rm --name bridge-270721-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-270721 --entrypoint /usr/bin/test -v bridge-270721:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 12:05:56.764572  658372 oci.go:107] Successfully prepared a docker volume bridge-270721
	I1213 12:05:56.764633  658372 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 12:05:56.764642  658372 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 12:05:56.764708  658372 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v bridge-270721:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 12:06:00.758244  658372 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v bridge-270721:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.993482858s)
	I1213 12:06:00.758278  658372 kic.go:203] duration metric: took 3.993631356s to extract preloaded images to volume ...
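
	The preload step above is nothing more than an lz4-compressed tarball unpacked into the named volume through a throwaway kicbase container. The same command as a standalone sketch, with the image reference and host path taken from this log:

	    MK=/home/jenkins/minikube-integration/22127-307042/.minikube
	    PRELOAD=$MK/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	    KICBASE=gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f

	    # Unpack the preloaded images into the bridge-270721 volume. The volume
	    # is mounted as /var inside the node container later, so containerd
	    # starts with its image store already populated and skips pulling.
	    docker run --rm --entrypoint /usr/bin/tar \
	      -v "$PRELOAD":/preloaded.tar:ro \
	      -v bridge-270721:/extractDir \
	      "$KICBASE" -I lz4 -xf /preloaded.tar -C /extractDir
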
	W1213 12:06:00.758421  658372 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 12:06:00.758524  658372 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 12:06:00.810572  658372 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname bridge-270721 --name bridge-270721 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-270721 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=bridge-270721 --network bridge-270721 --ip 192.168.76.2 --volume bridge-270721:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 12:06:01.127118  658372 cli_runner.go:164] Run: docker container inspect bridge-270721 --format={{.State.Running}}
	I1213 12:06:01.149838  658372 cli_runner.go:164] Run: docker container inspect bridge-270721 --format={{.State.Status}}
	I1213 12:06:01.185515  658372 cli_runner.go:164] Run: docker exec bridge-270721 stat /var/lib/dpkg/alternatives/iptables
	I1213 12:06:01.251139  658372 oci.go:144] the created container "bridge-270721" has a running status.
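
	The docker run above is the whole "machine": one privileged kicbase container with a static IP on the cluster network and its SSH and API server ports published on random localhost ports. A trimmed sketch (the labels and the extra 2376/5000/32443 publishes from the log are omitted for brevity):

	    KICBASE=gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f

	    docker run -d -t --privileged \
	      --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
	      --tmpfs /tmp --tmpfs /run \
	      -v /lib/modules:/lib/modules:ro \
	      --hostname bridge-270721 --name bridge-270721 \
	      --network bridge-270721 --ip 192.168.76.2 \
	      --volume bridge-270721:/var \
	      --memory=3072mb --cpus=2 -e container=docker \
	      --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 \
	      "$KICBASE"

	    # Recover the randomly assigned host port for SSH (33470 in this run);
	    # this is the same inspect template the log shows minikube using.
	    docker container inspect bridge-270721 \
	      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
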
	I1213 12:06:01.251173  658372 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/bridge-270721/id_rsa...
	I1213 12:06:01.701097  658372 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22127-307042/.minikube/machines/bridge-270721/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 12:06:01.724348  658372 cli_runner.go:164] Run: docker container inspect bridge-270721 --format={{.State.Status}}
	I1213 12:06:01.745449  658372 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 12:06:01.745469  658372 kic_runner.go:114] Args: [docker exec --privileged bridge-270721 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 12:06:01.814640  658372 cli_runner.go:164] Run: docker container inspect bridge-270721 --format={{.State.Status}}
	I1213 12:06:01.840474  658372 machine.go:94] provisionDockerMachine start ...
	I1213 12:06:01.840574  658372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-270721
	I1213 12:06:01.863089  658372 main.go:143] libmachine: Using SSH client type: native
	I1213 12:06:01.863450  658372 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33470 <nil> <nil>}
	I1213 12:06:01.863468  658372 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 12:06:01.866364  658372 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 12:06:05.023204  658372 main.go:143] libmachine: SSH cmd err, output: <nil>: bridge-270721
	
	I1213 12:06:05.023232  658372 ubuntu.go:182] provisioning hostname "bridge-270721"
	I1213 12:06:05.023346  658372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-270721
	I1213 12:06:05.041369  658372 main.go:143] libmachine: Using SSH client type: native
	I1213 12:06:05.041677  658372 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33470 <nil> <nil>}
	I1213 12:06:05.041693  658372 main.go:143] libmachine: About to run SSH command:
	sudo hostname bridge-270721 && echo "bridge-270721" | sudo tee /etc/hostname
	I1213 12:06:05.215125  658372 main.go:143] libmachine: SSH cmd err, output: <nil>: bridge-270721
	
	I1213 12:06:05.215220  658372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-270721
	I1213 12:06:05.235554  658372 main.go:143] libmachine: Using SSH client type: native
	I1213 12:06:05.235906  658372 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33470 <nil> <nil>}
	I1213 12:06:05.235930  658372 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-270721' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-270721/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-270721' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 12:06:05.387168  658372 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 12:06:05.387200  658372 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-307042/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-307042/.minikube}
	I1213 12:06:05.387224  658372 ubuntu.go:190] setting up certificates
	I1213 12:06:05.387240  658372 provision.go:84] configureAuth start
	I1213 12:06:05.387314  658372 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-270721
	I1213 12:06:05.405007  658372 provision.go:143] copyHostCerts
	I1213 12:06:05.405080  658372 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem, removing ...
	I1213 12:06:05.405097  658372 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem
	I1213 12:06:05.405180  658372 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem (1082 bytes)
	I1213 12:06:05.405277  658372 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem, removing ...
	I1213 12:06:05.405287  658372 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem
	I1213 12:06:05.405320  658372 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem (1123 bytes)
	I1213 12:06:05.405378  658372 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem, removing ...
	I1213 12:06:05.405387  658372 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem
	I1213 12:06:05.405412  658372 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem (1675 bytes)
	I1213 12:06:05.405460  658372 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem org=jenkins.bridge-270721 san=[127.0.0.1 192.168.76.2 bridge-270721 localhost minikube]
	I1213 12:06:05.645926  658372 provision.go:177] copyRemoteCerts
	I1213 12:06:05.645994  658372 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 12:06:05.646043  658372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-270721
	I1213 12:06:05.666041  658372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33470 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/bridge-270721/id_rsa Username:docker}
	I1213 12:06:05.770671  658372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 12:06:05.788338  658372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 12:06:05.805901  658372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 12:06:05.825285  658372 provision.go:87] duration metric: took 438.024993ms to configureAuth
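
	configureAuth above amounts to minting a Docker server certificate signed by the minikube CA with the SAN set reported in the log, then copying it into /etc/docker over SSH. minikube does this in Go via crypto/x509, not openssl; what follows is only a hedged openssl equivalent, with $CERTS standing for the .minikube/certs directory from the log and 1095 days matching the 26280h CertExpiration in the cluster config:

	    CERTS=/home/jenkins/minikube-integration/22127-307042/.minikube/certs

	    # Key + CSR for the machine, then sign with the minikube CA, embedding
	    # the same SANs the log reports:
	    # 127.0.0.1 192.168.76.2 bridge-270721 localhost minikube
	    openssl req -new -newkey rsa:2048 -nodes \
	      -keyout server-key.pem -subj "/O=jenkins.bridge-270721" -out server.csr
	    openssl x509 -req -in server.csr \
	      -CA "$CERTS/ca.pem" -CAkey "$CERTS/ca-key.pem" -CAcreateserial \
	      -days 1095 -out server.pem \
	      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:bridge-270721,DNS:localhost,DNS:minikube')
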
	I1213 12:06:05.825312  658372 ubuntu.go:206] setting minikube options for container-runtime
	I1213 12:06:05.825508  658372 config.go:182] Loaded profile config "bridge-270721": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 12:06:05.825522  658372 machine.go:97] duration metric: took 3.98502915s to provisionDockerMachine
	I1213 12:06:05.825528  658372 client.go:176] duration metric: took 9.73963954s to LocalClient.Create
	I1213 12:06:05.825548  658372 start.go:167] duration metric: took 9.739716661s to libmachine.API.Create "bridge-270721"
	I1213 12:06:05.825559  658372 start.go:293] postStartSetup for "bridge-270721" (driver="docker")
	I1213 12:06:05.825568  658372 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 12:06:05.825627  658372 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 12:06:05.825672  658372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-270721
	I1213 12:06:05.843443  658372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33470 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/bridge-270721/id_rsa Username:docker}
	I1213 12:06:05.957429  658372 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 12:06:05.961688  658372 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 12:06:05.961720  658372 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 12:06:05.961732  658372 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/addons for local assets ...
	I1213 12:06:05.961792  658372 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/files for local assets ...
	I1213 12:06:05.961928  658372 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> 3089152.pem in /etc/ssl/certs
	I1213 12:06:05.962054  658372 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 12:06:05.974747  658372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 12:06:05.996442  658372 start.go:296] duration metric: took 170.8695ms for postStartSetup
	I1213 12:06:05.996839  658372 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-270721
	I1213 12:06:06.022844  658372 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/bridge-270721/config.json ...
	I1213 12:06:06.023210  658372 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 12:06:06.023262  658372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-270721
	I1213 12:06:06.040592  658372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33470 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/bridge-270721/id_rsa Username:docker}
	I1213 12:06:06.144107  658372 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 12:06:06.149090  658372 start.go:128] duration metric: took 10.066968619s to createHost
	I1213 12:06:06.149114  658372 start.go:83] releasing machines lock for "bridge-270721", held for 10.067106148s
	I1213 12:06:06.149193  658372 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-270721
	I1213 12:06:06.166599  658372 ssh_runner.go:195] Run: cat /version.json
	I1213 12:06:06.166653  658372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-270721
	I1213 12:06:06.166853  658372 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 12:06:06.166927  658372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-270721
	I1213 12:06:06.186210  658372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33470 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/bridge-270721/id_rsa Username:docker}
	I1213 12:06:06.197405  658372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33470 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/bridge-270721/id_rsa Username:docker}
	I1213 12:06:06.291017  658372 ssh_runner.go:195] Run: systemctl --version
	I1213 12:06:06.388810  658372 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 12:06:06.393166  658372 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 12:06:06.393267  658372 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 12:06:06.422202  658372 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 12:06:06.422228  658372 start.go:496] detecting cgroup driver to use...
	I1213 12:06:06.422261  658372 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 12:06:06.422315  658372 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 12:06:06.437284  658372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 12:06:06.450513  658372 docker.go:218] disabling cri-docker service (if available) ...
	I1213 12:06:06.450589  658372 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 12:06:06.467929  658372 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 12:06:06.487161  658372 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 12:06:06.606548  658372 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 12:06:06.738150  658372 docker.go:234] disabling docker service ...
	I1213 12:06:06.738268  658372 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 12:06:06.760518  658372 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 12:06:06.775412  658372 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 12:06:06.906517  658372 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 12:06:07.023372  658372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 12:06:07.039488  658372 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 12:06:07.055140  658372 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 12:06:07.064590  658372 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 12:06:07.073944  658372 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 12:06:07.074015  658372 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 12:06:07.083143  658372 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 12:06:07.092486  658372 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 12:06:07.101627  658372 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 12:06:07.110954  658372 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 12:06:07.119507  658372 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 12:06:07.128590  658372 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 12:06:07.138551  658372 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 12:06:07.147829  658372 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 12:06:07.156889  658372 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 12:06:07.164636  658372 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 12:06:07.278191  658372 ssh_runner.go:195] Run: sudo systemctl restart containerd
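
	Everything from the crictl.yaml write down to the restart is a small, idempotent rewrite of the runtime configuration. The same edits collected into one sketch (sed expressions copied from the log; the oom-score and unprivileged-ports edits are left out for brevity):

	    CFG=/etc/containerd/config.toml

	    # Point crictl at containerd's socket.
	    printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' |
	      sudo tee /etc/crictl.yaml >/dev/null

	    # Pin the pause image, force the cgroupfs driver, select the v2 runc
	    # shim, and fix the CNI conf directory.
	    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' "$CFG"
	    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"
	    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$CFG"
	    sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$CFG"

	    sudo systemctl daemon-reload && sudo systemctl restart containerd
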
	I1213 12:06:07.429091  658372 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 12:06:07.429175  658372 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 12:06:07.435552  658372 start.go:564] Will wait 60s for crictl version
	I1213 12:06:07.435647  658372 ssh_runner.go:195] Run: which crictl
	I1213 12:06:07.439723  658372 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 12:06:07.471467  658372 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 12:06:07.471552  658372 ssh_runner.go:195] Run: containerd --version
	I1213 12:06:07.497828  658372 ssh_runner.go:195] Run: containerd --version
	I1213 12:06:07.524994  658372 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.2.0 ...
	I1213 12:06:07.528012  658372 cli_runner.go:164] Run: docker network inspect bridge-270721 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 12:06:07.544804  658372 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 12:06:07.548844  658372 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 12:06:07.559858  658372 kubeadm.go:884] updating cluster {Name:bridge-270721 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:bridge-270721 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 12:06:07.559981  658372 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 12:06:07.560060  658372 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 12:06:07.585964  658372 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 12:06:07.585990  658372 containerd.go:534] Images already preloaded, skipping extraction
	I1213 12:06:07.586048  658372 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 12:06:07.613129  658372 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 12:06:07.613152  658372 cache_images.go:86] Images are preloaded, skipping loading
	I1213 12:06:07.613161  658372 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 containerd true true} ...
	I1213 12:06:07.613258  658372 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-270721 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:bridge-270721 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I1213 12:06:07.613324  658372 ssh_runner.go:195] Run: sudo crictl info
	I1213 12:06:07.638651  658372 cni.go:84] Creating CNI manager for "bridge"
	I1213 12:06:07.638680  658372 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 12:06:07.638796  658372 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-270721 NodeName:bridge-270721 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 12:06:07.638911  658372 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "bridge-270721"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
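	This YAML is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below and becomes the --config input for the eventual kubeadm init; the rename and the init itself happen later in the start flow, outside this excerpt. A hedged sketch of validating the file by hand on the node, using the version-pinned binary path from the log:

	    # --dry-run parses the InitConfiguration/ClusterConfiguration/
	    # KubeletConfiguration/KubeProxyConfiguration documents without
	    # modifying the node.
	    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml --dry-run
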
	I1213 12:06:07.638985  658372 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 12:06:07.646796  658372 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 12:06:07.646866  658372 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 12:06:07.654435  658372 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1213 12:06:07.667341  658372 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 12:06:07.680354  658372 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
	I1213 12:06:07.693540  658372 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 12:06:07.698333  658372 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 12:06:07.708462  658372 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 12:06:07.824363  658372 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 12:06:07.842941  658372 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/bridge-270721 for IP: 192.168.76.2
	I1213 12:06:07.842960  658372 certs.go:195] generating shared ca certs ...
	I1213 12:06:07.842976  658372 certs.go:227] acquiring lock for ca certs: {Name:mkca84a85b1b664dba08ef567e84d493256f825e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:06:07.843121  658372 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key
	I1213 12:06:07.843161  658372 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key
	I1213 12:06:07.843168  658372 certs.go:257] generating profile certs ...
	I1213 12:06:07.843237  658372 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/bridge-270721/client.key
	I1213 12:06:07.843247  658372 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/bridge-270721/client.crt with IP's: []
	I1213 12:06:08.169227  658372 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/bridge-270721/client.crt ...
	I1213 12:06:08.169305  658372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/bridge-270721/client.crt: {Name:mk39e68ee7068cd1fd8da0d1da61bff56d863953 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:06:08.169517  658372 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/bridge-270721/client.key ...
	I1213 12:06:08.169554  658372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/bridge-270721/client.key: {Name:mkd38f96380e17818d04b8fde9089cd995c119e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:06:08.169675  658372 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/bridge-270721/apiserver.key.62fd7759
	I1213 12:06:08.169716  658372 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/bridge-270721/apiserver.crt.62fd7759 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1213 12:06:08.512019  658372 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/bridge-270721/apiserver.crt.62fd7759 ...
	I1213 12:06:08.512051  658372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/bridge-270721/apiserver.crt.62fd7759: {Name:mk92394f11208fac083f6762f3c6925b4edec4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:06:08.512249  658372 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/bridge-270721/apiserver.key.62fd7759 ...
	I1213 12:06:08.512264  658372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/bridge-270721/apiserver.key.62fd7759: {Name:mk01e7665defc3e0bad454478607b6b55a3416cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:06:08.512351  658372 certs.go:382] copying /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/bridge-270721/apiserver.crt.62fd7759 -> /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/bridge-270721/apiserver.crt
	I1213 12:06:08.512436  658372 certs.go:386] copying /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/bridge-270721/apiserver.key.62fd7759 -> /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/bridge-270721/apiserver.key
	I1213 12:06:08.512497  658372 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/bridge-270721/proxy-client.key
	I1213 12:06:08.512514  658372 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/bridge-270721/proxy-client.crt with IP's: []
	I1213 12:06:08.573608  658372 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/bridge-270721/proxy-client.crt ...
	I1213 12:06:08.573639  658372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/bridge-270721/proxy-client.crt: {Name:mk0ac2e62d3e6ea07b0f09a63b1969adca2e4744 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:06:08.573815  658372 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/bridge-270721/proxy-client.key ...
	I1213 12:06:08.573829  658372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/bridge-270721/proxy-client.key: {Name:mk6a7d13ca472344824e03249cebada93af0099a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
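
	The SAN list on the apiserver cert generated above ([10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]) is what later lets clients reach the API server via the service ClusterIP, a localhost tunnel, or the node IP. A quick check of what was minted, using the profile paths from the log (the -ext option needs OpenSSL 1.1.1 or newer):

	    MK=/home/jenkins/minikube-integration/22127-307042/.minikube
	    P=$MK/profiles/bridge-270721

	    # Print the SANs embedded in the apiserver certificate, then verify
	    # that it chains to the shared minikube CA.
	    openssl x509 -in "$P/apiserver.crt" -noout -ext subjectAltName
	    openssl verify -CAfile "$MK/ca.crt" "$P/apiserver.crt"
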
	I1213 12:06:08.574018  658372 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem (1338 bytes)
	W1213 12:06:08.574065  658372 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915_empty.pem, impossibly tiny 0 bytes
	I1213 12:06:08.574078  658372 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 12:06:08.574111  658372 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem (1082 bytes)
	I1213 12:06:08.574142  658372 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem (1123 bytes)
	I1213 12:06:08.574170  658372 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem (1675 bytes)
	I1213 12:06:08.574220  658372 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem (1708 bytes)
	I1213 12:06:08.574851  658372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 12:06:08.594135  658372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 12:06:08.623734  658372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 12:06:08.643477  658372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 12:06:08.665207  658372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/bridge-270721/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 12:06:08.684427  658372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/bridge-270721/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 12:06:08.703012  658372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/bridge-270721/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 12:06:08.721529  658372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/bridge-270721/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 12:06:08.743563  658372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem --> /usr/share/ca-certificates/308915.pem (1338 bytes)
	I1213 12:06:08.763279  658372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /usr/share/ca-certificates/3089152.pem (1708 bytes)
	I1213 12:06:08.782250  658372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 12:06:08.800714  658372 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 12:06:08.814785  658372 ssh_runner.go:195] Run: openssl version
	I1213 12:06:08.821581  658372 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/308915.pem
	I1213 12:06:08.829821  658372 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/308915.pem /etc/ssl/certs/308915.pem
	I1213 12:06:08.838011  658372 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/308915.pem
	I1213 12:06:08.842368  658372 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:22 /usr/share/ca-certificates/308915.pem
	I1213 12:06:08.842435  658372 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308915.pem
	I1213 12:06:08.885822  658372 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 12:06:08.893394  658372 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/308915.pem /etc/ssl/certs/51391683.0
	I1213 12:06:08.901009  658372 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3089152.pem
	I1213 12:06:08.908895  658372 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3089152.pem /etc/ssl/certs/3089152.pem
	I1213 12:06:08.921183  658372 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3089152.pem
	I1213 12:06:08.927407  658372 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:22 /usr/share/ca-certificates/3089152.pem
	I1213 12:06:08.927529  658372 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3089152.pem
	I1213 12:06:08.971300  658372 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 12:06:08.979150  658372 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3089152.pem /etc/ssl/certs/3ec20f2e.0
	I1213 12:06:08.986814  658372 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:06:08.994382  658372 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 12:06:09.003617  658372 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:06:09.009037  658372 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:13 /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:06:09.009182  658372 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:06:09.056333  658372 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 12:06:09.064259  658372 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
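
The test -L / ln -fs pairs above maintain OpenSSL's hashed-directory lookup convention: every CA under /etc/ssl/certs must be reachable through a symlink named <subject-hash>.0, where the hash is exactly what the preceding `openssl x509 -hash -noout` calls print. A minimal sketch of the same step done by hand:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # yields b5213941.0 here
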
	I1213 12:06:09.071980  658372 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 12:06:09.075784  658372 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 12:06:09.075888  658372 kubeadm.go:401] StartCluster: {Name:bridge-270721 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:bridge-270721 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 12:06:09.075980  658372 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 12:06:09.076056  658372 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 12:06:09.102161  658372 cri.go:89] found id: ""
	I1213 12:06:09.102284  658372 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 12:06:09.110245  658372 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 12:06:09.118433  658372 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 12:06:09.118558  658372 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 12:06:09.126675  658372 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 12:06:09.126722  658372 kubeadm.go:158] found existing configuration files:
	
	I1213 12:06:09.126774  658372 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 12:06:09.134778  658372 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 12:06:09.134883  658372 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 12:06:09.142324  658372 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 12:06:09.150240  658372 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 12:06:09.150351  658372 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 12:06:09.157751  658372 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 12:06:09.165480  658372 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 12:06:09.165580  658372 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 12:06:09.173238  658372 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 12:06:09.181428  658372 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 12:06:09.181525  658372 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 12:06:09.190246  658372 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 12:06:09.250801  658372 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1213 12:06:09.251116  658372 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 12:06:09.321946  658372 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 12:06:25.158029  658372 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 12:06:25.158089  658372 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 12:06:25.158182  658372 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 12:06:25.158242  658372 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 12:06:25.158281  658372 kubeadm.go:319] OS: Linux
	I1213 12:06:25.158330  658372 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 12:06:25.158384  658372 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 12:06:25.158435  658372 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 12:06:25.158487  658372 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 12:06:25.158549  658372 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 12:06:25.158616  658372 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 12:06:25.158667  658372 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 12:06:25.158750  658372 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 12:06:25.158802  658372 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 12:06:25.158880  658372 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 12:06:25.158979  658372 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 12:06:25.159073  658372 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 12:06:25.159140  658372 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 12:06:25.164196  658372 out.go:252]   - Generating certificates and keys ...
	I1213 12:06:25.164297  658372 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 12:06:25.164369  658372 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 12:06:25.164441  658372 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 12:06:25.164501  658372 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 12:06:25.164565  658372 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 12:06:25.164618  658372 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 12:06:25.164675  658372 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 12:06:25.164804  658372 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [bridge-270721 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 12:06:25.164864  658372 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 12:06:25.164991  658372 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [bridge-270721 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 12:06:25.165063  658372 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 12:06:25.165131  658372 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 12:06:25.165178  658372 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 12:06:25.165237  658372 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 12:06:25.165291  658372 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 12:06:25.165351  658372 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 12:06:25.165409  658372 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 12:06:25.165475  658372 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 12:06:25.165533  658372 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 12:06:25.165618  658372 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 12:06:25.165690  658372 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 12:06:25.168672  658372 out.go:252]   - Booting up control plane ...
	I1213 12:06:25.168788  658372 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 12:06:25.168880  658372 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 12:06:25.168959  658372 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 12:06:25.169086  658372 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 12:06:25.169205  658372 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 12:06:25.169311  658372 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 12:06:25.169395  658372 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 12:06:25.169434  658372 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 12:06:25.169564  658372 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 12:06:25.169668  658372 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 12:06:25.169727  658372 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501923969s
	I1213 12:06:25.169826  658372 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 12:06:25.169907  658372 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1213 12:06:25.169996  658372 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 12:06:25.170082  658372 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 12:06:25.170158  658372 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.723974527s
	I1213 12:06:25.170225  658372 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.047841032s
	I1213 12:06:25.170293  658372 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.001938973s
	I1213 12:06:25.170398  658372 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 12:06:25.170523  658372 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 12:06:25.170581  658372 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 12:06:25.170879  658372 kubeadm.go:319] [mark-control-plane] Marking the node bridge-270721 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 12:06:25.170947  658372 kubeadm.go:319] [bootstrap-token] Using token: zaustm.fszhffm37d079ybe
	I1213 12:06:25.173846  658372 out.go:252]   - Configuring RBAC rules ...
	I1213 12:06:25.173959  658372 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 12:06:25.174050  658372 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 12:06:25.174194  658372 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 12:06:25.174325  658372 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 12:06:25.174443  658372 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 12:06:25.174532  658372 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 12:06:25.174650  658372 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 12:06:25.174770  658372 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 12:06:25.174825  658372 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 12:06:25.174833  658372 kubeadm.go:319] 
	I1213 12:06:25.174894  658372 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 12:06:25.174903  658372 kubeadm.go:319] 
	I1213 12:06:25.174981  658372 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 12:06:25.174988  658372 kubeadm.go:319] 
	I1213 12:06:25.175013  658372 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 12:06:25.175076  658372 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 12:06:25.175129  658372 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 12:06:25.175136  658372 kubeadm.go:319] 
	I1213 12:06:25.175190  658372 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 12:06:25.175197  658372 kubeadm.go:319] 
	I1213 12:06:25.175245  658372 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 12:06:25.175252  658372 kubeadm.go:319] 
	I1213 12:06:25.175305  658372 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 12:06:25.175384  658372 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 12:06:25.175455  658372 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 12:06:25.175463  658372 kubeadm.go:319] 
	I1213 12:06:25.175548  658372 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 12:06:25.175629  658372 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 12:06:25.175636  658372 kubeadm.go:319] 
	I1213 12:06:25.175721  658372 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token zaustm.fszhffm37d079ybe \
	I1213 12:06:25.175842  658372 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2b5aae63f59669f0b4e3ed658fbdddeef7a3996ea2c8f22710210607dc196205 \
	I1213 12:06:25.175867  658372 kubeadm.go:319] 	--control-plane 
	I1213 12:06:25.175874  658372 kubeadm.go:319] 
	I1213 12:06:25.175959  658372 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 12:06:25.175966  658372 kubeadm.go:319] 
	I1213 12:06:25.176048  658372 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token zaustm.fszhffm37d079ybe \
	I1213 12:06:25.176171  658372 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2b5aae63f59669f0b4e3ed658fbdddeef7a3996ea2c8f22710210607dc196205 
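
The --discovery-token-ca-cert-hash value is the SHA-256 digest of the cluster CA's Subject Public Key Info, which lets a joining node verify the CA it is handed out of band. The standard recipe from the kubeadm documentation (assuming an RSA CA key, and substituting the cert location minikube populated above for the usual /etc/kubernetes/pki/ca.crt):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
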
	I1213 12:06:25.176182  658372 cni.go:84] Creating CNI manager for "bridge"
	I1213 12:06:25.179236  658372 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 12:06:25.182131  658372 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 12:06:25.191403  658372 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
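
The 496-byte /etc/cni/net.d/1-k8s.conflist written here carries the bridge CNI configuration selected by CNI:bridge in the cluster config. A representative conflist for the bridge plugin, written as a sketch (field values are illustrative; the exact bytes minikube generates may differ):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
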
	I1213 12:06:25.205571  658372 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 12:06:25.205641  658372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 12:06:25.205696  658372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-270721 minikube.k8s.io/updated_at=2025_12_13T12_06_25_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=625889e93b3a3d0ab16814abcc3b4c90fb83309b minikube.k8s.io/name=bridge-270721 minikube.k8s.io/primary=true
	I1213 12:06:25.369050  658372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 12:06:25.369139  658372 ops.go:34] apiserver oom_adj: -16
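
The -16 reported here is read back from the `cat /proc/$(pgrep kube-apiserver)/oom_adj` run above; a strongly negative value tells the kernel OOM killer to spare the apiserver under memory pressure. oom_adj is the legacy interface (range -17..15); on current kernels the same knob is exposed as oom_score_adj:

    cat /proc/$(pgrep -xn kube-apiserver)/oom_score_adj
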
	I1213 12:06:25.869529  658372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 12:06:26.369986  658372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 12:06:26.869681  658372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 12:06:27.369972  658372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 12:06:27.870078  658372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 12:06:28.369639  658372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 12:06:28.869807  658372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 12:06:28.992145  658372 kubeadm.go:1114] duration metric: took 3.786564599s to wait for elevateKubeSystemPrivileges
	I1213 12:06:28.992174  658372 kubeadm.go:403] duration metric: took 19.91629011s to StartCluster
	I1213 12:06:28.992191  658372 settings.go:142] acquiring lock: {Name:mk079e9a25ebbc2c8fbae42d4c6ed096a652c00b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:06:28.992260  658372 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 12:06:28.993194  658372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/kubeconfig: {Name:mk9039d1d506122ff52224469c478e739c5ebabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:06:28.993397  658372 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 12:06:28.993488  658372 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 12:06:28.993749  658372 config.go:182] Loaded profile config "bridge-270721": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 12:06:28.993786  658372 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 12:06:28.993858  658372 addons.go:70] Setting storage-provisioner=true in profile "bridge-270721"
	I1213 12:06:28.993872  658372 addons.go:239] Setting addon storage-provisioner=true in "bridge-270721"
	I1213 12:06:28.993893  658372 host.go:66] Checking if "bridge-270721" exists ...
	I1213 12:06:28.994863  658372 cli_runner.go:164] Run: docker container inspect bridge-270721 --format={{.State.Status}}
	I1213 12:06:28.994916  658372 addons.go:70] Setting default-storageclass=true in profile "bridge-270721"
	I1213 12:06:28.994936  658372 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "bridge-270721"
	I1213 12:06:28.995242  658372 cli_runner.go:164] Run: docker container inspect bridge-270721 --format={{.State.Status}}
	I1213 12:06:28.996829  658372 out.go:179] * Verifying Kubernetes components...
	I1213 12:06:29.000447  658372 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 12:06:29.032200  658372 addons.go:239] Setting addon default-storageclass=true in "bridge-270721"
	I1213 12:06:29.032239  658372 host.go:66] Checking if "bridge-270721" exists ...
	I1213 12:06:29.032660  658372 cli_runner.go:164] Run: docker container inspect bridge-270721 --format={{.State.Status}}
	I1213 12:06:29.043235  658372 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 12:06:29.046127  658372 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:06:29.046150  658372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 12:06:29.046216  658372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-270721
	I1213 12:06:29.074671  658372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33470 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/bridge-270721/id_rsa Username:docker}
	I1213 12:06:29.079252  658372 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 12:06:29.079275  658372 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 12:06:29.079338  658372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-270721
	I1213 12:06:29.110729  658372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33470 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/bridge-270721/id_rsa Username:docker}
	I1213 12:06:29.279066  658372 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 12:06:29.279199  658372 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 12:06:29.304166  658372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:06:29.419214  658372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 12:06:29.832504  658372 node_ready.go:35] waiting up to 15m0s for node "bridge-270721" to be "Ready" ...
	I1213 12:06:29.832897  658372 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
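
The "host record injected" line is the visible result of the sed pipeline started at 12:06:29.279066: it splices a hosts block into the CoreDNS Corefile ConfigMap so pods can resolve the host gateway by name. Per that sed command, the injected stanza reads:

    hosts {
       192.168.76.1 host.minikube.internal
       fallthrough
    }
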
	I1213 12:06:29.859307  658372 node_ready.go:49] node "bridge-270721" is "Ready"
	I1213 12:06:29.859337  658372 node_ready.go:38] duration metric: took 26.753932ms for node "bridge-270721" to be "Ready" ...
	I1213 12:06:29.859350  658372 api_server.go:52] waiting for apiserver process to appear ...
	I1213 12:06:29.859421  658372 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:30.246925  658372 api_server.go:72] duration metric: took 1.253495635s to wait for apiserver process to appear ...
	I1213 12:06:30.246948  658372 api_server.go:88] waiting for apiserver healthz status ...
	I1213 12:06:30.246965  658372 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1213 12:06:30.259665  658372 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1213 12:06:30.260779  658372 api_server.go:141] control plane version: v1.34.2
	I1213 12:06:30.260845  658372 api_server.go:131] duration metric: took 13.888791ms to wait for apiserver health ...
	I1213 12:06:30.260870  658372 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 12:06:30.264961  658372 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1213 12:06:30.265337  658372 system_pods.go:59] 8 kube-system pods found
	I1213 12:06:30.265370  658372 system_pods.go:61] "coredns-66bc5c9577-2kdg8" [75411d9b-49df-4169-ba56-348a45e14e42] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:06:30.265380  658372 system_pods.go:61] "coredns-66bc5c9577-44j6f" [37d63179-4cfa-44e4-aafd-f017cb1ddc78] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:06:30.265402  658372 system_pods.go:61] "etcd-bridge-270721" [720422f4-83e1-4aa2-bb06-d62b493d4cff] Running
	I1213 12:06:30.265409  658372 system_pods.go:61] "kube-apiserver-bridge-270721" [79e6fbb0-3347-4843-8a66-29a51e7ecf08] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 12:06:30.265416  658372 system_pods.go:61] "kube-controller-manager-bridge-270721" [35dc65a9-bfa9-4293-bf4e-835cc959eed5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 12:06:30.265427  658372 system_pods.go:61] "kube-proxy-s8htx" [b1eb7355-0a37-4fe6-9545-1104c88c2dc8] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 12:06:30.265437  658372 system_pods.go:61] "kube-scheduler-bridge-270721" [2887d732-8c5e-4c65-b81c-0bbe73ab8935] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 12:06:30.265447  658372 system_pods.go:61] "storage-provisioner" [aba79b13-85c2-42e6-8d14-11a0bbf1cb58] Pending
	I1213 12:06:30.265453  658372 system_pods.go:74] duration metric: took 4.564647ms to wait for pod list to return data ...
	I1213 12:06:30.265469  658372 default_sa.go:34] waiting for default service account to be created ...
	I1213 12:06:30.267863  658372 addons.go:530] duration metric: took 1.274063874s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1213 12:06:30.270974  658372 default_sa.go:45] found service account: "default"
	I1213 12:06:30.270997  658372 default_sa.go:55] duration metric: took 5.517009ms for default service account to be created ...
	I1213 12:06:30.271049  658372 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 12:06:30.276286  658372 system_pods.go:86] 8 kube-system pods found
	I1213 12:06:30.276318  658372 system_pods.go:89] "coredns-66bc5c9577-2kdg8" [75411d9b-49df-4169-ba56-348a45e14e42] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:06:30.276327  658372 system_pods.go:89] "coredns-66bc5c9577-44j6f" [37d63179-4cfa-44e4-aafd-f017cb1ddc78] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:06:30.276333  658372 system_pods.go:89] "etcd-bridge-270721" [720422f4-83e1-4aa2-bb06-d62b493d4cff] Running
	I1213 12:06:30.276341  658372 system_pods.go:89] "kube-apiserver-bridge-270721" [79e6fbb0-3347-4843-8a66-29a51e7ecf08] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 12:06:30.276347  658372 system_pods.go:89] "kube-controller-manager-bridge-270721" [35dc65a9-bfa9-4293-bf4e-835cc959eed5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 12:06:30.276353  658372 system_pods.go:89] "kube-proxy-s8htx" [b1eb7355-0a37-4fe6-9545-1104c88c2dc8] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 12:06:30.276359  658372 system_pods.go:89] "kube-scheduler-bridge-270721" [2887d732-8c5e-4c65-b81c-0bbe73ab8935] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 12:06:30.276364  658372 system_pods.go:89] "storage-provisioner" [aba79b13-85c2-42e6-8d14-11a0bbf1cb58] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 12:06:30.276390  658372 retry.go:31] will retry after 276.925654ms: missing components: kube-dns, kube-proxy
	I1213 12:06:30.337874  658372 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-270721" context rescaled to 1 replicas
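
The rescale to 1 replica trims the stock two-replica CoreDNS deployment down for a single-node cluster; the roughly equivalent manual step would be:

    kubectl -n kube-system scale deployment coredns --replicas=1
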
	I1213 12:06:30.557817  658372 system_pods.go:86] 8 kube-system pods found
	I1213 12:06:30.557855  658372 system_pods.go:89] "coredns-66bc5c9577-2kdg8" [75411d9b-49df-4169-ba56-348a45e14e42] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:06:30.557871  658372 system_pods.go:89] "coredns-66bc5c9577-44j6f" [37d63179-4cfa-44e4-aafd-f017cb1ddc78] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:06:30.557877  658372 system_pods.go:89] "etcd-bridge-270721" [720422f4-83e1-4aa2-bb06-d62b493d4cff] Running
	I1213 12:06:30.557884  658372 system_pods.go:89] "kube-apiserver-bridge-270721" [79e6fbb0-3347-4843-8a66-29a51e7ecf08] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 12:06:30.557896  658372 system_pods.go:89] "kube-controller-manager-bridge-270721" [35dc65a9-bfa9-4293-bf4e-835cc959eed5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 12:06:30.557911  658372 system_pods.go:89] "kube-proxy-s8htx" [b1eb7355-0a37-4fe6-9545-1104c88c2dc8] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 12:06:30.557921  658372 system_pods.go:89] "kube-scheduler-bridge-270721" [2887d732-8c5e-4c65-b81c-0bbe73ab8935] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 12:06:30.557928  658372 system_pods.go:89] "storage-provisioner" [aba79b13-85c2-42e6-8d14-11a0bbf1cb58] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 12:06:30.557955  658372 retry.go:31] will retry after 289.154995ms: missing components: kube-dns, kube-proxy
	I1213 12:06:30.852894  658372 system_pods.go:86] 8 kube-system pods found
	I1213 12:06:30.852981  658372 system_pods.go:89] "coredns-66bc5c9577-2kdg8" [75411d9b-49df-4169-ba56-348a45e14e42] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:06:30.853005  658372 system_pods.go:89] "coredns-66bc5c9577-44j6f" [37d63179-4cfa-44e4-aafd-f017cb1ddc78] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:06:30.853047  658372 system_pods.go:89] "etcd-bridge-270721" [720422f4-83e1-4aa2-bb06-d62b493d4cff] Running
	I1213 12:06:30.853072  658372 system_pods.go:89] "kube-apiserver-bridge-270721" [79e6fbb0-3347-4843-8a66-29a51e7ecf08] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 12:06:30.853094  658372 system_pods.go:89] "kube-controller-manager-bridge-270721" [35dc65a9-bfa9-4293-bf4e-835cc959eed5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 12:06:30.853127  658372 system_pods.go:89] "kube-proxy-s8htx" [b1eb7355-0a37-4fe6-9545-1104c88c2dc8] Running
	I1213 12:06:30.853152  658372 system_pods.go:89] "kube-scheduler-bridge-270721" [2887d732-8c5e-4c65-b81c-0bbe73ab8935] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 12:06:30.853172  658372 system_pods.go:89] "storage-provisioner" [aba79b13-85c2-42e6-8d14-11a0bbf1cb58] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 12:06:30.853214  658372 retry.go:31] will retry after 293.106015ms: missing components: kube-dns
	I1213 12:06:31.150981  658372 system_pods.go:86] 8 kube-system pods found
	I1213 12:06:31.151019  658372 system_pods.go:89] "coredns-66bc5c9577-2kdg8" [75411d9b-49df-4169-ba56-348a45e14e42] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:06:31.151029  658372 system_pods.go:89] "coredns-66bc5c9577-44j6f" [37d63179-4cfa-44e4-aafd-f017cb1ddc78] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:06:31.151034  658372 system_pods.go:89] "etcd-bridge-270721" [720422f4-83e1-4aa2-bb06-d62b493d4cff] Running
	I1213 12:06:31.151048  658372 system_pods.go:89] "kube-apiserver-bridge-270721" [79e6fbb0-3347-4843-8a66-29a51e7ecf08] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 12:06:31.151054  658372 system_pods.go:89] "kube-controller-manager-bridge-270721" [35dc65a9-bfa9-4293-bf4e-835cc959eed5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 12:06:31.151064  658372 system_pods.go:89] "kube-proxy-s8htx" [b1eb7355-0a37-4fe6-9545-1104c88c2dc8] Running
	I1213 12:06:31.151069  658372 system_pods.go:89] "kube-scheduler-bridge-270721" [2887d732-8c5e-4c65-b81c-0bbe73ab8935] Running
	I1213 12:06:31.151077  658372 system_pods.go:89] "storage-provisioner" [aba79b13-85c2-42e6-8d14-11a0bbf1cb58] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 12:06:31.151093  658372 retry.go:31] will retry after 550.778705ms: missing components: kube-dns
	I1213 12:06:31.715911  658372 system_pods.go:86] 8 kube-system pods found
	I1213 12:06:31.715956  658372 system_pods.go:89] "coredns-66bc5c9577-2kdg8" [75411d9b-49df-4169-ba56-348a45e14e42] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:06:31.715967  658372 system_pods.go:89] "coredns-66bc5c9577-44j6f" [37d63179-4cfa-44e4-aafd-f017cb1ddc78] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:06:31.715972  658372 system_pods.go:89] "etcd-bridge-270721" [720422f4-83e1-4aa2-bb06-d62b493d4cff] Running
	I1213 12:06:31.715979  658372 system_pods.go:89] "kube-apiserver-bridge-270721" [79e6fbb0-3347-4843-8a66-29a51e7ecf08] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 12:06:31.716001  658372 system_pods.go:89] "kube-controller-manager-bridge-270721" [35dc65a9-bfa9-4293-bf4e-835cc959eed5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 12:06:31.716019  658372 system_pods.go:89] "kube-proxy-s8htx" [b1eb7355-0a37-4fe6-9545-1104c88c2dc8] Running
	I1213 12:06:31.716024  658372 system_pods.go:89] "kube-scheduler-bridge-270721" [2887d732-8c5e-4c65-b81c-0bbe73ab8935] Running
	I1213 12:06:31.716034  658372 system_pods.go:89] "storage-provisioner" [aba79b13-85c2-42e6-8d14-11a0bbf1cb58] Running
	I1213 12:06:31.716041  658372 system_pods.go:126] duration metric: took 1.444985557s to wait for k8s-apps to be running ...
	I1213 12:06:31.716054  658372 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 12:06:31.716119  658372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 12:06:31.732935  658372 system_svc.go:56] duration metric: took 16.869627ms WaitForService to wait for kubelet
	I1213 12:06:31.732967  658372 kubeadm.go:587] duration metric: took 2.739548223s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 12:06:31.732987  658372 node_conditions.go:102] verifying NodePressure condition ...
	I1213 12:06:31.741285  658372 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1213 12:06:31.741320  658372 node_conditions.go:123] node cpu capacity is 2
	I1213 12:06:31.741343  658372 node_conditions.go:105] duration metric: took 8.350538ms to run NodePressure ...
	I1213 12:06:31.741356  658372 start.go:242] waiting for startup goroutines ...
	I1213 12:06:31.741363  658372 start.go:247] waiting for cluster config update ...
	I1213 12:06:31.741378  658372 start.go:256] writing updated cluster config ...
	I1213 12:06:31.741680  658372 ssh_runner.go:195] Run: rm -f paused
	I1213 12:06:31.750196  658372 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 12:06:31.754482  658372 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2kdg8" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 12:06:33.760445  658372 pod_ready.go:104] pod "coredns-66bc5c9577-2kdg8" is not "Ready", error: <nil>
	W1213 12:06:36.260155  658372 pod_ready.go:104] pod "coredns-66bc5c9577-2kdg8" is not "Ready", error: <nil>
	W1213 12:06:38.260588  658372 pod_ready.go:104] pod "coredns-66bc5c9577-2kdg8" is not "Ready", error: <nil>
	W1213 12:06:40.260635  658372 pod_ready.go:104] pod "coredns-66bc5c9577-2kdg8" is not "Ready", error: <nil>
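
The pod_ready loop above keeps polling until every kube-system pod carrying one of the listed labels reports Ready, within the 4m budget. Outside the test harness, a comparable one-shot check against the same cluster (assuming kubectl access) would be:

    kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
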
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.850948040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.850964713Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.851002933Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.851021788Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.851032094Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.851043467Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.851052681Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.851068796Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.851086577Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.851121301Z" level=info msg="Connect containerd service"
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.851401698Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.851964747Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.867726494Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.868237695Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.868561226Z" level=info msg="Start subscribing containerd event"
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.868632505Z" level=info msg="Start recovering state"
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.889278015Z" level=info msg="Start event monitor"
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.889343254Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.889355102Z" level=info msg="Start streaming server"
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.889372054Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.889392994Z" level=info msg="runtime interface starting up..."
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.889400510Z" level=info msg="starting plugins..."
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.889437261Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 11:46:53 no-preload-333352 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 11:46:53 no-preload-333352 containerd[557]: time="2025-12-13T11:46:53.891303551Z" level=info msg="containerd successfully booted in 0.061815s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:44.548250   10392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:44.548697   10392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:44.551049   10392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:44.551431   10392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:44.553562   10392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +12.907693] overlayfs: idmapped layers are currently not supported
	[Dec13 09:25] overlayfs: idmapped layers are currently not supported
	[ +26.192425] overlayfs: idmapped layers are currently not supported
	[Dec13 09:26] overlayfs: idmapped layers are currently not supported
	[ +25.729788] overlayfs: idmapped layers are currently not supported
	[Dec13 09:27] overlayfs: idmapped layers are currently not supported
	[Dec13 09:28] overlayfs: idmapped layers are currently not supported
	[Dec13 09:31] overlayfs: idmapped layers are currently not supported
	[Dec13 09:32] overlayfs: idmapped layers are currently not supported
	[ +17.979093] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Dec13 09:43] overlayfs: idmapped layers are currently not supported
	[Dec13 09:45] overlayfs: idmapped layers are currently not supported
	[ +25.885102] overlayfs: idmapped layers are currently not supported
	[Dec13 09:46] overlayfs: idmapped layers are currently not supported
	[ +22.078149] overlayfs: idmapped layers are currently not supported
	[Dec13 09:47] overlayfs: idmapped layers are currently not supported
	[Dec13 09:48] overlayfs: idmapped layers are currently not supported
	[Dec13 09:49] overlayfs: idmapped layers are currently not supported
	[Dec13 09:51] overlayfs: idmapped layers are currently not supported
	[ +17.043564] overlayfs: idmapped layers are currently not supported
	[Dec13 09:52] overlayfs: idmapped layers are currently not supported
	[Dec13 09:53] overlayfs: idmapped layers are currently not supported
	[Dec13 10:12] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:19] hrtimer: interrupt took 21247146 ns
	
	
	==> kernel <==
	 12:06:44 up  4:49,  0 user,  load average: 1.46, 1.69, 1.54
	Linux no-preload-333352 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 12:06:41 no-preload-333352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:06:41 no-preload-333352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1580.
	Dec 13 12:06:41 no-preload-333352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:06:41 no-preload-333352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:06:41 no-preload-333352 kubelet[10255]: E1213 12:06:41.963084   10255 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:06:41 no-preload-333352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:06:41 no-preload-333352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:06:42 no-preload-333352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1581.
	Dec 13 12:06:42 no-preload-333352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:06:42 no-preload-333352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:06:42 no-preload-333352 kubelet[10261]: E1213 12:06:42.712164   10261 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:06:42 no-preload-333352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:06:42 no-preload-333352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:06:43 no-preload-333352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1582.
	Dec 13 12:06:43 no-preload-333352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:06:43 no-preload-333352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:06:43 no-preload-333352 kubelet[10273]: E1213 12:06:43.464920   10273 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:06:43 no-preload-333352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:06:43 no-preload-333352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:06:44 no-preload-333352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1583.
	Dec 13 12:06:44 no-preload-333352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:06:44 no-preload-333352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:06:44 no-preload-333352 kubelet[10308]: E1213 12:06:44.239587   10308 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:06:44 no-preload-333352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:06:44 no-preload-333352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-333352 -n no-preload-333352
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-333352 -n no-preload-333352: exit status 2 (422.776488ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-333352" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (284.49s)
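Diagnosis note: the kubelet excerpt above shows a tight crash-restart loop (restart counter 1580 through 1583 in four seconds) with the identical validation error each time: kubelet v1.35.0-beta.0 refuses to run on a host using cgroup v1, and this Ubuntu 20.04 / 5.15.0-1084-aws node still mounts the legacy hierarchy. Below is a minimal Go sketch of that host-side check, assuming golang.org/x/sys/unix is available; it is an illustration of the condition being validated, not kubelet's actual validation code.

// cgroupcheck: report whether a host mounts cgroup v1 or v2, the
// same property the failing kubelet validation above depends on.
// Sketch only; not part of kubelet or the minikube test suite.
package main

import (
	"fmt"
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	var st unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
		log.Fatalf("statfs /sys/fs/cgroup: %v", err)
	}
	// On a unified (v2) hierarchy the mount itself is a cgroup2
	// filesystem; on v1 it is a tmpfs with per-controller mounts below.
	if st.Type == unix.CGROUP2_SUPER_MAGIC {
		fmt.Println("cgroup v2 (unified hierarchy)")
	} else {
		fmt.Println("cgroup v1 legacy hierarchy: kubelet v1.35.0-beta.0 will refuse to start, as in the log above")
	}
}

Run on the affected node, a "cgroup v1" result would explain why every restart in the loop fails the same way, and why all the no-preload and newest-cni failures in this report share this signature.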
E1213 12:07:50.852572  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:07:57.866045  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:07:57.872492  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:07:57.884110  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:07:57.905516  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:07:57.947022  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:07:58.028460  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:07:58.190251  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:07:58.512305  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:07:59.154535  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:08:00.435982  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:08:02.998792  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
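Diagnosis note: the gaps between the repeated "Loading client cert failed" lines roughly double, from about 10ms at 12:07:57.87 to ~2.5s by 12:08:02.99, the signature of an exponential-backoff retry around a cert load that can no longer succeed once the profile directory has been deleted. The Go sketch below reproduces that retry shape; loadCert is a hypothetical stand-in and the 10ms starting interval is an assumption read off the timestamps. It illustrates the pattern, not client-go's transport-cache code.

// backoff: sketch of the retry cadence visible in the timestamps above.
package main

import (
	"fmt"
	"os"
	"time"
)

// loadCert is hypothetical; it fails the same way the log does while
// the profile directory (and its client.crt) no longer exists.
func loadCert(path string) error {
	_, err := os.ReadFile(path)
	return err
}

func main() {
	path := "/home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/client.crt"
	delay := 10 * time.Millisecond // assumed starting interval
	for attempt := 1; attempt <= 10; attempt++ {
		if err := loadCert(path); err != nil {
			fmt.Printf("attempt %d: %v; retrying in %v\n", attempt, err, delay)
			time.Sleep(delay)
			delay *= 2 // doubling matches the growing gaps in the log
			continue
		}
		return
	}
}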

                                                
                                    

Test pass (345/417)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 6.78
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.2/json-events 4.03
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.43
18 TestDownloadOnly/v1.34.2/DeleteAll 0.26
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-beta.0/json-events 4.14
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.22
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.6
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
36 TestAddons/Setup 143.2
38 TestAddons/serial/Volcano 40.86
40 TestAddons/serial/GCPAuth/Namespaces 0.18
41 TestAddons/serial/GCPAuth/FakeCredentials 9.95
44 TestAddons/parallel/Registry 17.52
45 TestAddons/parallel/RegistryCreds 0.78
46 TestAddons/parallel/Ingress 16.8
47 TestAddons/parallel/InspektorGadget 11.76
48 TestAddons/parallel/MetricsServer 5.84
50 TestAddons/parallel/CSI 54.32
51 TestAddons/parallel/Headlamp 11.31
52 TestAddons/parallel/CloudSpanner 5.65
53 TestAddons/parallel/LocalPath 51.35
54 TestAddons/parallel/NvidiaDevicePlugin 5.66
55 TestAddons/parallel/Yakd 11.89
57 TestAddons/StoppedEnableDisable 12.37
58 TestCertOptions 39.05
59 TestCertExpiration 230.3
61 TestForceSystemdFlag 34.35
62 TestForceSystemdEnv 37.75
63 TestDockerEnvContainerd 51.54
67 TestErrorSpam/setup 32.6
68 TestErrorSpam/start 0.8
69 TestErrorSpam/status 1.23
70 TestErrorSpam/pause 1.75
71 TestErrorSpam/unpause 1.85
72 TestErrorSpam/stop 1.67
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 49.19
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 7.64
79 TestFunctional/serial/KubeContext 0.06
80 TestFunctional/serial/KubectlGetPods 0.09
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.48
84 TestFunctional/serial/CacheCmd/cache/add_local 1.25
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.97
89 TestFunctional/serial/CacheCmd/cache/delete 0.13
90 TestFunctional/serial/MinikubeKubectlCmd 0.16
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
92 TestFunctional/serial/ExtraConfig 42.78
93 TestFunctional/serial/ComponentHealth 0.11
94 TestFunctional/serial/LogsCmd 1.44
95 TestFunctional/serial/LogsFileCmd 1.44
96 TestFunctional/serial/InvalidService 4.69
98 TestFunctional/parallel/ConfigCmd 0.48
99 TestFunctional/parallel/DashboardCmd 5.87
100 TestFunctional/parallel/DryRun 0.48
101 TestFunctional/parallel/InternationalLanguage 0.25
102 TestFunctional/parallel/StatusCmd 1.38
106 TestFunctional/parallel/ServiceCmdConnect 7.63
107 TestFunctional/parallel/AddonsCmd 0.14
108 TestFunctional/parallel/PersistentVolumeClaim 25.03
110 TestFunctional/parallel/SSHCmd 0.71
111 TestFunctional/parallel/CpCmd 2.48
113 TestFunctional/parallel/FileSync 0.36
114 TestFunctional/parallel/CertSync 2.33
118 TestFunctional/parallel/NodeLabels 0.13
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.81
122 TestFunctional/parallel/License 0.35
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.65
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.48
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/parallel/ServiceCmd/DeployApp 7.21
135 TestFunctional/parallel/ServiceCmd/List 0.51
136 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
137 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
138 TestFunctional/parallel/ServiceCmd/Format 0.38
139 TestFunctional/parallel/ServiceCmd/URL 0.56
140 TestFunctional/parallel/ProfileCmd/profile_not_create 0.58
141 TestFunctional/parallel/MountCmd/any-port 9.91
142 TestFunctional/parallel/ProfileCmd/profile_list 0.54
143 TestFunctional/parallel/ProfileCmd/profile_json_output 0.51
144 TestFunctional/parallel/Version/short 0.08
145 TestFunctional/parallel/Version/components 1.43
146 TestFunctional/parallel/MountCmd/specific-port 2.41
147 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
148 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
149 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
150 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
151 TestFunctional/parallel/ImageCommands/ImageBuild 3.78
152 TestFunctional/parallel/ImageCommands/Setup 0.65
153 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.51
154 TestFunctional/parallel/MountCmd/VerifyCleanup 1.74
155 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.54
156 TestFunctional/parallel/UpdateContextCmd/no_changes 0.24
157 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
158 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
159 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.79
160 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.46
161 TestFunctional/parallel/ImageCommands/ImageRemove 0.61
162 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.7
163 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.46
164 TestFunctional/delete_echo-server_images 0.05
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.26
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.06
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.32
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.9
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.12
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 0.92
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.16
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.46
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.45
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.2
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.14
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.75
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 2.39
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.29
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.7
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.57
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.6
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.11
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.41
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.39
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.4
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 2.11
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.3
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.05
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.5
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.23
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.23
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.22
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.24
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.53
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.26
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.15
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 1.09
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.31
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.33
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.5
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.71
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.41
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.15
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.15
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.14
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 154.59
265 TestMultiControlPlane/serial/DeployApp 7.2
266 TestMultiControlPlane/serial/PingHostFromPods 1.65
267 TestMultiControlPlane/serial/AddWorkerNode 31.55
268 TestMultiControlPlane/serial/NodeLabels 0.11
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.13
270 TestMultiControlPlane/serial/CopyFile 20.36
271 TestMultiControlPlane/serial/StopSecondaryNode 13.06
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.87
273 TestMultiControlPlane/serial/RestartSecondaryNode 13.13
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.38
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 102.39
276 TestMultiControlPlane/serial/DeleteSecondaryNode 11.53
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.84
278 TestMultiControlPlane/serial/StopCluster 36.31
279 TestMultiControlPlane/serial/RestartCluster 68.35
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.82
281 TestMultiControlPlane/serial/AddSecondaryNode 53.96
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.1
287 TestJSONOutput/start/Command 55.59
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
293 TestJSONOutput/pause/Command 0.73
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
299 TestJSONOutput/unpause/Command 0.64
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 5.98
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.24
312 TestKicCustomNetwork/create_custom_network 39.76
313 TestKicCustomNetwork/use_default_bridge_network 35.22
314 TestKicExistingNetwork 35.7
315 TestKicCustomSubnet 38.59
316 TestKicStaticIP 35.66
317 TestMainNoArgs 0.06
318 TestMinikubeProfile 71.96
321 TestMountStart/serial/StartWithMountFirst 8.45
322 TestMountStart/serial/VerifyMountFirst 0.3
323 TestMountStart/serial/StartWithMountSecond 8.63
324 TestMountStart/serial/VerifyMountSecond 0.29
325 TestMountStart/serial/DeleteFirst 1.72
326 TestMountStart/serial/VerifyMountPostDelete 0.29
327 TestMountStart/serial/Stop 1.29
328 TestMountStart/serial/RestartStopped 7.71
329 TestMountStart/serial/VerifyMountPostStop 0.28
332 TestMultiNode/serial/FreshStart2Nodes 81.33
333 TestMultiNode/serial/DeployApp2Nodes 6.55
334 TestMultiNode/serial/PingHostFrom2Pods 1.01
335 TestMultiNode/serial/AddNode 29.82
336 TestMultiNode/serial/MultiNodeLabels 0.1
337 TestMultiNode/serial/ProfileList 0.74
338 TestMultiNode/serial/CopyFile 10.92
339 TestMultiNode/serial/StopNode 2.42
340 TestMultiNode/serial/StartAfterStop 7.88
341 TestMultiNode/serial/RestartKeepsNodes 81.98
342 TestMultiNode/serial/DeleteNode 5.7
343 TestMultiNode/serial/StopMultiNode 24.13
344 TestMultiNode/serial/RestartMultiNode 48.39
345 TestMultiNode/serial/ValidateNameConflict 40.68
350 TestPreload 118.59
352 TestScheduledStopUnix 109.16
355 TestInsufficientStorage 12.45
356 TestRunningBinaryUpgrade 310.49
359 TestMissingContainerUpgrade 134.32
361 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
362 TestNoKubernetes/serial/StartWithK8s 44.24
363 TestNoKubernetes/serial/StartWithStopK8s 18.15
364 TestNoKubernetes/serial/Start 7.57
365 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
366 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
367 TestNoKubernetes/serial/ProfileList 0.7
368 TestNoKubernetes/serial/Stop 1.29
369 TestNoKubernetes/serial/StartNoArgs 6.37
370 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
371 TestStoppedBinaryUpgrade/Setup 0.99
372 TestStoppedBinaryUpgrade/Upgrade 305.33
373 TestStoppedBinaryUpgrade/MinikubeLogs 2.24
382 TestPause/serial/Start 56.04
383 TestPause/serial/SecondStartNoReconfiguration 6.36
384 TestPause/serial/Pause 0.78
385 TestPause/serial/VerifyStatus 0.32
386 TestPause/serial/Unpause 0.66
387 TestPause/serial/PauseAgain 0.88
388 TestPause/serial/DeletePaused 2.58
389 TestPause/serial/VerifyDeletedResources 0.39
397 TestNetworkPlugins/group/false 3.81
402 TestStartStop/group/old-k8s-version/serial/FirstStart 58.44
403 TestStartStop/group/old-k8s-version/serial/DeployApp 10.39
404 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.22
405 TestStartStop/group/old-k8s-version/serial/Stop 12.17
406 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
407 TestStartStop/group/old-k8s-version/serial/SecondStart 53.53
408 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
409 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
410 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
411 TestStartStop/group/old-k8s-version/serial/Pause 3.27
415 TestStartStop/group/embed-certs/serial/FirstStart 56.4
416 TestStartStop/group/embed-certs/serial/DeployApp 9.34
417 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.08
418 TestStartStop/group/embed-certs/serial/Stop 12.14
419 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
420 TestStartStop/group/embed-certs/serial/SecondStart 50.23
421 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
422 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
423 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
424 TestStartStop/group/embed-certs/serial/Pause 3.05
426 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 50.12
427 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.35
428 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.11
429 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.09
430 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
431 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 48.19
432 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
433 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
434 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
435 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.04
440 TestStartStop/group/no-preload/serial/Stop 1.34
441 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
443 TestStartStop/group/newest-cni/serial/DeployApp 0
445 TestStartStop/group/newest-cni/serial/Stop 1.31
446 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
449 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
450 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
451 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
453 TestNetworkPlugins/group/auto/Start 52.13
454 TestNetworkPlugins/group/auto/KubeletFlags 0.32
455 TestNetworkPlugins/group/auto/NetCatPod 8.26
456 TestNetworkPlugins/group/auto/DNS 0.17
457 TestNetworkPlugins/group/auto/Localhost 0.15
458 TestNetworkPlugins/group/auto/HairPin 0.15
459 TestNetworkPlugins/group/flannel/Start 55.59
460 TestNetworkPlugins/group/flannel/ControllerPod 6.01
461 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
462 TestNetworkPlugins/group/flannel/NetCatPod 10.27
463 TestNetworkPlugins/group/flannel/DNS 0.17
464 TestNetworkPlugins/group/flannel/Localhost 0.14
465 TestNetworkPlugins/group/flannel/HairPin 0.16
466 TestNetworkPlugins/group/calico/Start 58.28
468 TestNetworkPlugins/group/calico/ControllerPod 6.01
469 TestNetworkPlugins/group/calico/KubeletFlags 0.33
470 TestNetworkPlugins/group/calico/NetCatPod 10.26
471 TestNetworkPlugins/group/calico/DNS 0.2
472 TestNetworkPlugins/group/calico/Localhost 0.17
473 TestNetworkPlugins/group/calico/HairPin 0.15
474 TestNetworkPlugins/group/custom-flannel/Start 64.72
475 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
476 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.26
477 TestNetworkPlugins/group/custom-flannel/DNS 0.2
478 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
479 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
480 TestNetworkPlugins/group/kindnet/Start 52.07
481 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
482 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
483 TestNetworkPlugins/group/kindnet/NetCatPod 9.29
484 TestNetworkPlugins/group/kindnet/DNS 0.19
485 TestNetworkPlugins/group/kindnet/Localhost 0.16
486 TestNetworkPlugins/group/kindnet/HairPin 0.14
487 TestNetworkPlugins/group/bridge/Start 72.54
488 TestNetworkPlugins/group/enable-default-cni/Start 76.67
489 TestNetworkPlugins/group/bridge/KubeletFlags 0.43
490 TestNetworkPlugins/group/bridge/NetCatPod 10.42
491 TestNetworkPlugins/group/bridge/DNS 0.26
492 TestNetworkPlugins/group/bridge/Localhost 0.21
493 TestNetworkPlugins/group/bridge/HairPin 0.17
494 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
495 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.29
496 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
497 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
498 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
x
+
TestDownloadOnly/v1.28.0/json-events (6.78s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-814859 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-814859 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.777338594s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.78s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1213 10:12:37.430814  308915 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1213 10:12:37.430916  308915 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
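The preload-exists subtest completes in 0.00s because it only has to stat the cached tarball logged above. A sketch of that kind of existence check follows, assuming the MINIKUBE_HOME cache layout shown in the log; the preloadPath helper is illustrative, not minikube's preload.go.

// preloadexists: stat the cached preload tarball, mirroring the
// "Found local preload" line above. Illustrative sketch only.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath reproduces the cache filename layout visible in the log.
func preloadPath(minikubeHome, k8sVersion, runtime string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-arm64.tar.lz4", k8sVersion, runtime)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.28.0", "containerd")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("Found local preload:", p)
	} else {
		fmt.Println("no local preload:", err)
	}
}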

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-814859
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-814859: exit status 85 (92.865334ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-814859 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-814859 │ jenkins │ v1.37.0 │ 13 Dec 25 10:12 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:12:30
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:12:30.696987  308920 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:12:30.697127  308920 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:12:30.697139  308920 out.go:374] Setting ErrFile to fd 2...
	I1213 10:12:30.697144  308920 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:12:30.697494  308920 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	W1213 10:12:30.697673  308920 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22127-307042/.minikube/config/config.json: open /home/jenkins/minikube-integration/22127-307042/.minikube/config/config.json: no such file or directory
	I1213 10:12:30.698108  308920 out.go:368] Setting JSON to true
	I1213 10:12:30.698975  308920 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10504,"bootTime":1765610247,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 10:12:30.699070  308920 start.go:143] virtualization:  
	I1213 10:12:30.705676  308920 out.go:99] [download-only-814859] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1213 10:12:30.705853  308920 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball: no such file or directory
	I1213 10:12:30.705984  308920 notify.go:221] Checking for updates...
	I1213 10:12:30.709623  308920 out.go:171] MINIKUBE_LOCATION=22127
	I1213 10:12:30.712498  308920 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:12:30.715360  308920 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:12:30.718220  308920 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 10:12:30.720956  308920 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1213 10:12:30.726582  308920 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 10:12:30.726981  308920 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:12:30.752888  308920 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:12:30.753008  308920 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:12:30.816725  308920 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-13 10:12:30.807469119 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:12:30.816835  308920 docker.go:319] overlay module found
	I1213 10:12:30.819825  308920 out.go:99] Using the docker driver based on user configuration
	I1213 10:12:30.819862  308920 start.go:309] selected driver: docker
	I1213 10:12:30.819879  308920 start.go:927] validating driver "docker" against <nil>
	I1213 10:12:30.819986  308920 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:12:30.874391  308920 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-13 10:12:30.864637594 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:12:30.874554  308920 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 10:12:30.874920  308920 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1213 10:12:30.875072  308920 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 10:12:30.878219  308920 out.go:171] Using Docker driver with root privileges
	I1213 10:12:30.881173  308920 cni.go:84] Creating CNI manager for ""
	I1213 10:12:30.881253  308920 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:12:30.881267  308920 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 10:12:30.881348  308920 start.go:353] cluster config:
	{Name:download-only-814859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-814859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:12:30.884397  308920 out.go:99] Starting "download-only-814859" primary control-plane node in "download-only-814859" cluster
	I1213 10:12:30.884426  308920 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 10:12:30.887375  308920 out.go:99] Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:12:30.887439  308920 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1213 10:12:30.887534  308920 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:12:30.903474  308920 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1213 10:12:30.903688  308920 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1213 10:12:30.903785  308920 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1213 10:12:30.952375  308920 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1213 10:12:30.952409  308920 cache.go:65] Caching tarball of preloaded images
	I1213 10:12:30.952595  308920 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1213 10:12:30.955895  308920 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1213 10:12:30.955925  308920 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1213 10:12:31.039208  308920 preload.go:295] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1213 10:12:31.039369  308920 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1213 10:12:34.437284  308920 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1213 10:12:34.437881  308920 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/download-only-814859/config.json ...
	I1213 10:12:34.438014  308920 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/download-only-814859/config.json: {Name:mkc29a5eef15e65259775c317eac984e6a2ea156 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:12:34.438244  308920 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1213 10:12:34.438615  308920 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22127-307042/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-814859 host does not exist
	  To start a cluster, run: "minikube start -p download-only-814859"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
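The Last Start log above fetches the preload with a ?checksum=md5:38d7f581... query after retrieving the digest from the GCS API, so the download is verified before it is cached. Below is a sketch of the verification half of that step, assuming the tarball path is passed as the first argument; the md5sum helper is illustrative, not minikube's download package.

// verify: hash a downloaded tarball and compare it against the digest
// the GCS API returned in the log above. Illustrative sketch only.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
)

func md5sum(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	const want = "38d7f581f2fa4226c8af2c9106b982b7" // checksum from the log
	got, err := md5sum(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	if got != want {
		log.Fatalf("checksum mismatch: got %s want %s", got, want)
	}
	fmt.Println("preload tarball checksum OK")
}

Exiting non-zero on a mismatch is what keeps a truncated or corrupt download out of the cache directory checked by the preload-exists subtest.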

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-814859
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/json-events (4.03s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-849173 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-849173 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.027547004s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (4.03s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1213 10:12:41.905745  308915 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
I1213 10:12:41.905786  308915 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/LogsDuration (0.43s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-849173
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-849173: exit status 85 (427.23326ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-814859 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-814859 │ jenkins │ v1.37.0 │ 13 Dec 25 10:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 10:12 UTC │ 13 Dec 25 10:12 UTC │
	│ delete  │ -p download-only-814859                                                                                                                                                               │ download-only-814859 │ jenkins │ v1.37.0 │ 13 Dec 25 10:12 UTC │ 13 Dec 25 10:12 UTC │
	│ start   │ -o=json --download-only -p download-only-849173 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-849173 │ jenkins │ v1.37.0 │ 13 Dec 25 10:12 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:12:37
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:12:37.919227  309123 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:12:37.919344  309123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:12:37.919354  309123 out.go:374] Setting ErrFile to fd 2...
	I1213 10:12:37.919360  309123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:12:37.919615  309123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 10:12:37.920020  309123 out.go:368] Setting JSON to true
	I1213 10:12:37.920825  309123 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10511,"bootTime":1765610247,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 10:12:37.920895  309123 start.go:143] virtualization:  
	I1213 10:12:37.924223  309123 out.go:99] [download-only-849173] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:12:37.924594  309123 notify.go:221] Checking for updates...
	I1213 10:12:37.928231  309123 out.go:171] MINIKUBE_LOCATION=22127
	I1213 10:12:37.931272  309123 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:12:37.934160  309123 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:12:37.937188  309123 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 10:12:37.940006  309123 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1213 10:12:37.945879  309123 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 10:12:37.946132  309123 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:12:37.972723  309123 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:12:37.972856  309123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:12:38.030549  309123 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-13 10:12:38.020481709 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:12:38.030666  309123 docker.go:319] overlay module found
	I1213 10:12:38.033695  309123 out.go:99] Using the docker driver based on user configuration
	I1213 10:12:38.033743  309123 start.go:309] selected driver: docker
	I1213 10:12:38.033752  309123 start.go:927] validating driver "docker" against <nil>
	I1213 10:12:38.033870  309123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:12:38.092629  309123 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2025-12-13 10:12:38.083129472 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:12:38.092791  309123 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 10:12:38.093057  309123 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1213 10:12:38.093234  309123 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 10:12:38.096385  309123 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-849173 host does not exist
	  To start a cluster, run: "minikube start -p download-only-849173"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.43s)

TestDownloadOnly/v1.34.2/DeleteAll (0.26s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.26s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-849173
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.35.0-beta.0/json-events (4.14s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-677138 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-677138 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.138141919s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (4.14s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1213 10:12:46.870134  308915 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1213 10:12:46.870170  308915 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-677138
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-677138: exit status 85 (83.544786ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                             ARGS                                                                                             │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-814859 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd        │ download-only-814859 │ jenkins │ v1.37.0 │ 13 Dec 25 10:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 10:12 UTC │ 13 Dec 25 10:12 UTC │
	│ delete  │ -p download-only-814859                                                                                                                                                                      │ download-only-814859 │ jenkins │ v1.37.0 │ 13 Dec 25 10:12 UTC │ 13 Dec 25 10:12 UTC │
	│ start   │ -o=json --download-only -p download-only-849173 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd        │ download-only-849173 │ jenkins │ v1.37.0 │ 13 Dec 25 10:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 10:12 UTC │ 13 Dec 25 10:12 UTC │
	│ delete  │ -p download-only-849173                                                                                                                                                                      │ download-only-849173 │ jenkins │ v1.37.0 │ 13 Dec 25 10:12 UTC │ 13 Dec 25 10:12 UTC │
	│ start   │ -o=json --download-only -p download-only-677138 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-677138 │ jenkins │ v1.37.0 │ 13 Dec 25 10:12 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:12:42
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:12:42.777015  309322 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:12:42.777184  309322 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:12:42.777215  309322 out.go:374] Setting ErrFile to fd 2...
	I1213 10:12:42.777236  309322 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:12:42.777519  309322 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 10:12:42.777956  309322 out.go:368] Setting JSON to true
	I1213 10:12:42.778810  309322 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10516,"bootTime":1765610247,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 10:12:42.778905  309322 start.go:143] virtualization:  
	I1213 10:12:42.782549  309322 out.go:99] [download-only-677138] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:12:42.782830  309322 notify.go:221] Checking for updates...
	I1213 10:12:42.786139  309322 out.go:171] MINIKUBE_LOCATION=22127
	I1213 10:12:42.789403  309322 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:12:42.792577  309322 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:12:42.795660  309322 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 10:12:42.798764  309322 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1213 10:12:42.804747  309322 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 10:12:42.805064  309322 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:12:42.827764  309322 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:12:42.827877  309322 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:12:42.903295  309322 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-13 10:12:42.893580424 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:12:42.903401  309322 docker.go:319] overlay module found
	I1213 10:12:42.906571  309322 out.go:99] Using the docker driver based on user configuration
	I1213 10:12:42.906617  309322 start.go:309] selected driver: docker
	I1213 10:12:42.906626  309322 start.go:927] validating driver "docker" against <nil>
	I1213 10:12:42.906754  309322 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:12:42.971793  309322 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-13 10:12:42.963204114 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:12:42.971952  309322 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 10:12:42.972218  309322 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1213 10:12:42.972366  309322 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 10:12:42.975520  309322 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-677138 host does not exist
	  To start a cluster, run: "minikube start -p download-only-677138"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.22s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-677138
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.6s)

=== RUN   TestBinaryMirror
I1213 10:12:48.177711  308915 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-261308 --alsologtostderr --binary-mirror http://127.0.0.1:39087 --driver=docker  --container-runtime=containerd
helpers_test.go:176: Cleaning up "binary-mirror-261308" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-261308
--- PASS: TestBinaryMirror (0.60s)
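For reference, a rough sketch of the flow this test exercises: serving binaries locally and pointing minikube at the mirror via --binary-mirror (the port, profile name, and served directory below are illustrative placeholders, not values from this run):

    # serve previously cached release binaries over HTTP (assumed layout)
    python3 -m http.server 39087 --directory "$HOME/.minikube/cache" &
    # fetch kubectl/kubelet/kubeadm through the mirror instead of dl.k8s.io
    out/minikube-linux-arm64 start --download-only -p binary-mirror-demo \
        --binary-mirror http://127.0.0.1:39087 \
        --driver=docker --container-runtime=containerd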

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-672850
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-672850: exit status 85 (84.073087ms)

-- stdout --
	* Profile "addons-672850" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-672850"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-672850
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-672850: exit status 85 (94.338176ms)

-- stdout --
	* Profile "addons-672850" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-672850"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

TestAddons/Setup (143.2s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-672850 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-672850 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m23.193929527s)
--- PASS: TestAddons/Setup (143.20s)
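To confirm which of the addons requested above actually came up enabled on the profile, the standard addons subcommand can be used (a minimal sketch against the same profile as this run):

    # show per-addon enabled/disabled status for this profile
    out/minikube-linux-arm64 -p addons-672850 addons list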

TestAddons/serial/Volcano (40.86s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:886: volcano-controller stabilized in 51.220316ms
addons_test.go:878: volcano-admission stabilized in 52.026643ms
addons_test.go:870: volcano-scheduler stabilized in 52.098955ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-76c996c8bf-vxgtw" [54c6c156-063e-4d1f-9a97-2fcdaa8ac9df] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003974744s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-6c447bd768-dfrrl" [59041091-9bf3-4050-9161-70e671be20ba] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003414949s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-6fd4f85cb8-7557v" [ea9644e0-7d45-4fc2-9a8a-cbfd88fa6126] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004171016s
addons_test.go:905: (dbg) Run:  kubectl --context addons-672850 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-672850 create -f testdata/vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-672850 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [ac024b87-34ac-4723-8ba3-55866e35389d] Pending
helpers_test.go:353: "test-job-nginx-0" [ac024b87-34ac-4723-8ba3-55866e35389d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [ac024b87-34ac-4723-8ba3-55866e35389d] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.002920589s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-672850 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-672850 addons disable volcano --alsologtostderr -v=1: (12.152509737s)
--- PASS: TestAddons/serial/Volcano (40.86s)
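The vcjob created above comes from testdata/vcjob.yaml; a minimal sketch of what such a Volcano job manifest looks like (field values here are assumptions for illustration, not copied from the testdata):

    kubectl --context addons-672850 apply -f - <<'EOF'
    apiVersion: batch.volcano.sh/v1alpha1
    kind: Job
    metadata:
      name: test-job
      namespace: my-volcano
    spec:
      schedulerName: volcano   # hand the job to the volcano scheduler
      minAvailable: 1
      tasks:
        - name: nginx
          replicas: 1
          template:
            spec:
              restartPolicy: Never
              containers:
                - name: nginx
                  image: nginx
    EOF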

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-672850 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-672850 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/serial/GCPAuth/FakeCredentials (9.95s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-672850 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-672850 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [0d3f1733-e2bb-405a-b650-c43ef3b94011] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [0d3f1733-e2bb-405a-b650-c43ef3b94011] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003745111s
addons_test.go:696: (dbg) Run:  kubectl --context addons-672850 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-672850 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-672850 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-672850 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.95s)

TestAddons/parallel/Registry (17.52s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 4.83388ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-xlxf5" [9e33bc4e-db65-44de-891d-76fd2e8f96d9] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004187232s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-s2ldd" [5588e356-a163-4e17-a0a8-d61b8f16f8e1] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004579718s
addons_test.go:394: (dbg) Run:  kubectl --context addons-672850 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-672850 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-672850 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.403064791s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-672850 ip
2025/12/13 10:16:29 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-672850 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.52s)
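The DEBUG GET above probes the registry through the node IP; the same check can be scripted from the host against the standard registry HTTP API (a sketch, assuming the addon keeps its default port 5000 as seen in this run):

    # list repositories in the in-cluster registry via the minikube node IP
    curl -s "http://$(out/minikube-linux-arm64 -p addons-672850 ip):5000/v2/_catalog"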

TestAddons/parallel/RegistryCreds (0.78s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 4.45767ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-672850
addons_test.go:334: (dbg) Run:  kubectl --context addons-672850 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-672850 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.78s)

TestAddons/parallel/Ingress (16.8s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-672850 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-672850 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-672850 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [2adecff6-0e39-40d4-b72b-04665b315711] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [2adecff6-0e39-40d4-b72b-04665b315711] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 6.005649947s
I1213 10:17:45.255695  308915 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-672850 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-672850 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-672850 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-672850 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-672850 addons disable ingress-dns --alsologtostderr -v=1: (1.278559974s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-672850 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-672850 addons disable ingress --alsologtostderr -v=1: (7.839172836s)
--- PASS: TestAddons/parallel/Ingress (16.80s)
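The curl above runs inside the node over SSH; an equivalent check from the host resolves the ingress through the node IP and the Host header the nginx testdata routes on (sketch):

    # request the nginx ingress as nginx.example.com
    curl -s -H 'Host: nginx.example.com' "http://$(out/minikube-linux-arm64 -p addons-672850 ip)/"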

TestAddons/parallel/InspektorGadget (11.76s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-vp8hd" [2bfac03a-f3fd-4039-923a-d23285af9392] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003849931s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-672850 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-672850 addons disable inspektor-gadget --alsologtostderr -v=1: (5.751058953s)
--- PASS: TestAddons/parallel/InspektorGadget (11.76s)

TestAddons/parallel/MetricsServer (5.84s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 4.732807ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-57d2f" [f3cc64e5-2255-4861-bfcd-88da81d2ba5d] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004056857s
addons_test.go:465: (dbg) Run:  kubectl --context addons-672850 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-672850 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.84s)

TestAddons/parallel/CSI (54.32s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1213 10:16:24.227863  308915 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1213 10:16:24.231779  308915 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1213 10:16:24.231806  308915 kapi.go:107] duration metric: took 6.757584ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 6.768842ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-672850 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-672850 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-672850 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-672850 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-672850 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-672850 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-672850 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-672850 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-672850 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-672850 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-672850 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-672850 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-672850 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-672850 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-672850 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-672850 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-672850 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-672850 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [c0ec2aaf-fccd-49a5-8d3d-a3c81ba23d03] Pending
helpers_test.go:353: "task-pv-pod" [c0ec2aaf-fccd-49a5-8d3d-a3c81ba23d03] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [c0ec2aaf-fccd-49a5-8d3d-a3c81ba23d03] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.003628834s
addons_test.go:574: (dbg) Run:  kubectl --context addons-672850 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-672850 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-672850 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-672850 delete pod task-pv-pod
addons_test.go:584: (dbg) Done: kubectl --context addons-672850 delete pod task-pv-pod: (1.021096909s)
addons_test.go:590: (dbg) Run:  kubectl --context addons-672850 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-672850 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-672850 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-672850 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-672850 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-672850 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-672850 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-672850 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-672850 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-672850 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-672850 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-672850 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-672850 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [6c27404c-89a3-4135-a5dc-df5a8386d4d0] Pending
helpers_test.go:353: "task-pv-pod-restore" [6c27404c-89a3-4135-a5dc-df5a8386d4d0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [6c27404c-89a3-4135-a5dc-df5a8386d4d0] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003205459s
addons_test.go:616: (dbg) Run:  kubectl --context addons-672850 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-672850 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-672850 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-672850 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-672850 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-672850 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.953658912s)
--- PASS: TestAddons/parallel/CSI (54.32s)
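The claim in testdata/csi-hostpath-driver/pvc.yaml targets the hostpath CSI driver's storage class; a minimal sketch of such a PVC (the csi-hostpath-sc class name and 1Gi size are assumptions for illustration):

    kubectl --context addons-672850 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: hpvc
    spec:
      storageClassName: csi-hostpath-sc   # assumed class installed by the addon
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
    EOF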

TestAddons/parallel/Headlamp (11.31s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-672850 --alsologtostderr -v=1
addons_test.go:810: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-672850 --alsologtostderr -v=1: (1.033123208s)
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-fr2kq" [897878fc-8dc4-4de0-bfe6-b325805eab20] Pending
helpers_test.go:353: "headlamp-dfcdc64b-fr2kq" [897878fc-8dc4-4de0-bfe6-b325805eab20] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-fr2kq" [897878fc-8dc4-4de0-bfe6-b325805eab20] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003333547s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-672850 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (11.31s)

TestAddons/parallel/CloudSpanner (5.65s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-kdklh" [e60a8c31-30ad-4df1-a317-a4d22468cf22] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004105882s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-672850 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.65s)

TestAddons/parallel/LocalPath (51.35s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-672850 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-672850 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-672850 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-672850 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-672850 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-672850 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-672850 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [997e6a7c-9e05-489a-a5d9-3fb51f5ef835] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [997e6a7c-9e05-489a-a5d9-3fb51f5ef835] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [997e6a7c-9e05-489a-a5d9-3fb51f5ef835] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003488763s
addons_test.go:969: (dbg) Run:  kubectl --context addons-672850 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-672850 ssh "cat /opt/local-path-provisioner/pvc-5dd29cad-96a6-4811-ba61-1a80acff4469_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-672850 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-672850 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-672850 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-672850 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.10782084s)
--- PASS: TestAddons/parallel/LocalPath (51.35s)
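storage-provisioner-rancher provisions volumes with the local-path provisioner, which is why the written file shows up under /opt/local-path-provisioner on the node; a minimal claim against it looks roughly like this (the local-path class name and size are assumptions for illustration):

    kubectl --context addons-672850 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-pvc
    spec:
      storageClassName: local-path   # assumed default class of the provisioner
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 64Mi
    EOF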

TestAddons/parallel/NvidiaDevicePlugin (5.66s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-bqnlw" [b5f6c128-f89f-4d42-9226-0260cee40f24] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003255426s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-672850 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.66s)

TestAddons/parallel/Yakd (11.89s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-drf9d" [2a941f3d-e8f3-47af-aeb2-7e1fcbd3ebea] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005276335s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-672850 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-672850 addons disable yakd --alsologtostderr -v=1: (5.886126668s)
--- PASS: TestAddons/parallel/Yakd (11.89s)

TestAddons/StoppedEnableDisable (12.37s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-672850
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-672850: (12.076690068s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-672850
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-672850
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-672850
--- PASS: TestAddons/StoppedEnableDisable (12.37s)

TestCertOptions (39.05s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-225037 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-225037 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (36.195658629s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-225037 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-225037 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-225037 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-225037" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-225037
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-225037: (2.095984686s)
--- PASS: TestCertOptions (39.05s)
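The openssl call above dumps the whole apiserver certificate; the SAN list, which is where the extra --apiserver-ips/--apiserver-names values land, can be filtered out directly (a sketch, runnable while the profile still exists):

    # confirm the custom IPs and names made it into the cert's SANs
    out/minikube-linux-arm64 -p cert-options-225037 ssh \
        "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
        | grep -A1 'Subject Alternative Name'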

TestCertExpiration (230.3s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-086397 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
E1213 11:33:11.163410  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-086397 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (39.400197686s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-086397 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
E1213 11:36:48.080787  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-086397 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.657578291s)
helpers_test.go:176: Cleaning up "cert-expiration-086397" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-086397
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-086397: (3.24064639s)
--- PASS: TestCertExpiration (230.30s)
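The two starts above first mint certificates with a 3m lifetime and then restart with --cert-expiration=8760h to force regeneration; the effective expiry can be read back from the node (sketch):

    # print the apiserver certificate's notAfter date
    out/minikube-linux-arm64 -p cert-expiration-086397 ssh \
        "sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"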

TestForceSystemdFlag (34.35s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-918030 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-918030 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (31.941778957s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-918030 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-flag-918030" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-918030
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-918030: (2.096050916s)
--- PASS: TestForceSystemdFlag (34.35s)
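
To repeat the check by hand (profile name hypothetical): start with --force-systemd, then dump the containerd config the test inspects. The assertion is presumably on the systemd cgroup setting, though the exact key is not shown in this log:
  out/minikube-linux-arm64 start -p force-systemd-demo --memory=3072 --force-systemd --driver=docker --container-runtime=containerd
  out/minikube-linux-arm64 -p force-systemd-demo ssh "cat /etc/containerd/config.toml" | grep -i systemd
  out/minikube-linux-arm64 delete -p force-systemd-demo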

TestForceSystemdEnv (37.75s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-835611 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-835611 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (34.97085737s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-835611 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-env-835611" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-835611
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-835611: (2.336670809s)
--- PASS: TestForceSystemdEnv (37.75s)
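
This variant starts without --force-systemd, so the systemd preference presumably comes from the environment; MINIKUBE_FORCE_SYSTEMD appears in the dry-run output later in this report. A sketch under that assumption (profile name hypothetical):
  MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-arm64 start -p force-systemd-env-demo --memory=3072 --driver=docker --container-runtime=containerd
  out/minikube-linux-arm64 -p force-systemd-env-demo ssh "cat /etc/containerd/config.toml"
  out/minikube-linux-arm64 delete -p force-systemd-env-demo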

TestDockerEnvContainerd (51.54s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-403574 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-403574 --driver=docker  --container-runtime=containerd: (35.529870035s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-403574"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-403574": (1.130847355s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Obb0PqLBqKsW/agent.328779" SSH_AGENT_PID="328780" DOCKER_HOST=ssh://docker@127.0.0.1:33110 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Obb0PqLBqKsW/agent.328779" SSH_AGENT_PID="328780" DOCKER_HOST=ssh://docker@127.0.0.1:33110 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Obb0PqLBqKsW/agent.328779" SSH_AGENT_PID="328780" DOCKER_HOST=ssh://docker@127.0.0.1:33110 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.257544472s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Obb0PqLBqKsW/agent.328779" SSH_AGENT_PID="328780" DOCKER_HOST=ssh://docker@127.0.0.1:33110 docker image ls"
docker_test.go:250: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Obb0PqLBqKsW/agent.328779" SSH_AGENT_PID="328780" DOCKER_HOST=ssh://docker@127.0.0.1:33110 docker image ls": (1.008970275s)
helpers_test.go:176: Cleaning up "dockerenv-403574" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-403574
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-403574: (2.076499276s)
--- PASS: TestDockerEnvContainerd (51.54s)
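
Outside the harness, the SSH_AUTH_SOCK/SSH_AGENT_PID/DOCKER_HOST plumbing shown above is normally wired up by eval'ing the docker-env output rather than pasting the values. A sketch against the same profile:
  eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-403574)"
  docker version        # now talks to the docker endpoint inside the minikube node over SSH
  DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
  docker image ls       # the freshly built image should be listed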

TestErrorSpam/setup (32.6s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-462625 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-462625 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-462625 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-462625 --driver=docker  --container-runtime=containerd: (32.595925384s)
--- PASS: TestErrorSpam/setup (32.60s)

TestErrorSpam/start (0.8s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-462625 --log_dir /tmp/nospam-462625 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-462625 --log_dir /tmp/nospam-462625 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-462625 --log_dir /tmp/nospam-462625 start --dry-run
--- PASS: TestErrorSpam/start (0.80s)

TestErrorSpam/status (1.23s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-462625 --log_dir /tmp/nospam-462625 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-462625 --log_dir /tmp/nospam-462625 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-462625 --log_dir /tmp/nospam-462625 status
--- PASS: TestErrorSpam/status (1.23s)

TestErrorSpam/pause (1.75s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-462625 --log_dir /tmp/nospam-462625 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-462625 --log_dir /tmp/nospam-462625 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-462625 --log_dir /tmp/nospam-462625 pause
--- PASS: TestErrorSpam/pause (1.75s)

TestErrorSpam/unpause (1.85s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-462625 --log_dir /tmp/nospam-462625 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-462625 --log_dir /tmp/nospam-462625 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-462625 --log_dir /tmp/nospam-462625 unpause
--- PASS: TestErrorSpam/unpause (1.85s)

TestErrorSpam/stop (1.67s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-462625 --log_dir /tmp/nospam-462625 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-462625 --log_dir /tmp/nospam-462625 stop: (1.460904276s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-462625 --log_dir /tmp/nospam-462625 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-462625 --log_dir /tmp/nospam-462625 stop
--- PASS: TestErrorSpam/stop (1.67s)
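
Each TestErrorSpam subtest above follows one pattern: run the same subcommand three times against a profile started with --log_dir, with the "Cleaning up N logfile(s)" lines suggesting the collected logs are scanned for unexpected spam between runs. The shape of it, using the stop subcommand as the example:
  out/minikube-linux-arm64 start -p nospam-462625 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-462625 --driver=docker --container-runtime=containerd
  for i in 1 2 3; do
    out/minikube-linux-arm64 -p nospam-462625 --log_dir /tmp/nospam-462625 stop
  done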

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/test/nested/copy/308915/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (49.19s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-319494 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1213 10:20:12.248746  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:20:12.255595  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:20:12.266947  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:20:12.288302  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:20:12.329672  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:20:12.411075  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:20:12.572581  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:20:12.894305  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:20:13.536399  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:20:14.817977  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:20:17.379912  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:20:22.502227  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:20:32.744216  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-319494 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (49.1877473s)
--- PASS: TestFunctional/serial/StartWithProxy (49.19s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (7.64s)
=== RUN   TestFunctional/serial/SoftStart
I1213 10:20:40.452360  308915 config.go:182] Loaded profile config "functional-319494": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-319494 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-319494 --alsologtostderr -v=8: (7.636569711s)
functional_test.go:678: soft start took 7.640525255s for "functional-319494" cluster.
I1213 10:20:48.089384  308915 config.go:182] Loaded profile config "functional-319494": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (7.64s)

TestFunctional/serial/KubeContext (0.06s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-319494 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.48s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-319494 cache add registry.k8s.io/pause:3.1: (1.288103781s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-319494 cache add registry.k8s.io/pause:3.3: (1.14972337s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-319494 cache add registry.k8s.io/pause:latest: (1.036929873s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.48s)
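
The same cache flow can be driven manually; each add pulls the image into minikube's host-side cache and loads it into the node:
  out/minikube-linux-arm64 -p functional-319494 cache add registry.k8s.io/pause:3.1
  out/minikube-linux-arm64 -p functional-319494 cache add registry.k8s.io/pause:3.3
  out/minikube-linux-arm64 -p functional-319494 cache add registry.k8s.io/pause:latest
  out/minikube-linux-arm64 cache list   # all three tags should be listed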

TestFunctional/serial/CacheCmd/cache/add_local (1.25s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-319494 /tmp/TestFunctionalserialCacheCmdcacheadd_local3989847243/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 cache add minikube-local-cache-test:functional-319494
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 cache delete minikube-local-cache-test:functional-319494
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-319494
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.25s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 ssh sudo crictl images
E1213 10:20:53.226475  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.97s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-319494 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (317.563975ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-arm64 -p functional-319494 cache reload: (1.008269627s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.97s)
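
The reload sequence above is a round trip: delete the image inside the node, confirm crictl no longer finds it (the FATA line and ssh exit status 1 are the expected failure), then restore it from the host-side cache:
  out/minikube-linux-arm64 -p functional-319494 ssh sudo crictl rmi registry.k8s.io/pause:latest
  out/minikube-linux-arm64 -p functional-319494 ssh sudo crictl inspecti registry.k8s.io/pause:latest || echo "gone, as expected"
  out/minikube-linux-arm64 -p functional-319494 cache reload
  out/minikube-linux-arm64 -p functional-319494 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # found again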

TestFunctional/serial/CacheCmd/cache/delete (0.13s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.16s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 kubectl -- --context functional-319494 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.16s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-319494 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (42.78s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-319494 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1213 10:21:34.189230  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-319494 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.777767561s)
functional_test.go:776: restart took 42.777854405s for "functional-319494" cluster.
I1213 10:21:38.566851  308915 config.go:182] Loaded profile config "functional-319494": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (42.78s)
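
--extra-config passes per-component flags through to the Kubernetes components (here an extra apiserver admission plugin); the same key/value also shows up as ExtraOptions in the profile dump in the DryRun output later in this report. The restart plus a quick control-plane check, both taken from this report:
  out/minikube-linux-arm64 start -p functional-319494 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
  kubectl --context functional-319494 get po -l tier=control-plane -n kube-system -o=json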

TestFunctional/serial/ComponentHealth (0.11s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-319494 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.44s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-319494 logs: (1.440354981s)
--- PASS: TestFunctional/serial/LogsCmd (1.44s)

TestFunctional/serial/LogsFileCmd (1.44s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 logs --file /tmp/TestFunctionalserialLogsFileCmd375331507/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-319494 logs --file /tmp/TestFunctionalserialLogsFileCmd375331507/001/logs.txt: (1.437452878s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.44s)

TestFunctional/serial/InvalidService (4.69s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-319494 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-319494
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-319494: exit status 115 (541.741434ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30943 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-319494 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.69s)
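
The negative path is straightforward to reproduce: a service whose pods never come up makes `minikube service` print the NodePort table and then exit 115 with SVC_UNREACHABLE, exactly as captured above:
  kubectl --context functional-319494 apply -f testdata/invalidsvc.yaml
  out/minikube-linux-arm64 service invalid-svc -p functional-319494; echo "exit=$?"   # expect 115
  kubectl --context functional-319494 delete -f testdata/invalidsvc.yaml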

TestFunctional/parallel/ConfigCmd (0.48s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-319494 config get cpus: exit status 14 (84.379619ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-319494 config get cpus: exit status 14 (61.200099ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
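
minikube config acts as a small key/value store, and `config get` on an unset key exits 14 with the "specified key could not be found" error seen twice above. The full cycle the test runs:
  out/minikube-linux-arm64 -p functional-319494 config get cpus     # exit 14: key unset
  out/minikube-linux-arm64 -p functional-319494 config set cpus 2
  out/minikube-linux-arm64 -p functional-319494 config get cpus     # prints 2
  out/minikube-linux-arm64 -p functional-319494 config unset cpus
  out/minikube-linux-arm64 -p functional-319494 config get cpus     # exit 14 again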

TestFunctional/parallel/DashboardCmd (5.87s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-319494 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-319494 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 343896: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (5.87s)

TestFunctional/parallel/DryRun (0.48s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-319494 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-319494 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (236.198471ms)

-- stdout --
	* [functional-319494] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1213 10:22:17.085693  343618 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:22:17.085809  343618 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:22:17.085853  343618 out.go:374] Setting ErrFile to fd 2...
	I1213 10:22:17.085860  343618 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:22:17.086245  343618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 10:22:17.086633  343618 out.go:368] Setting JSON to false
	I1213 10:22:17.087753  343618 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":11090,"bootTime":1765610247,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 10:22:17.087821  343618 start.go:143] virtualization:  
	I1213 10:22:17.091081  343618 out.go:179] * [functional-319494] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:22:17.094208  343618 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 10:22:17.094286  343618 notify.go:221] Checking for updates...
	I1213 10:22:17.100615  343618 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:22:17.103522  343618 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:22:17.106380  343618 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 10:22:17.109258  343618 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:22:17.112202  343618 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:22:17.115802  343618 config.go:182] Loaded profile config "functional-319494": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 10:22:17.116591  343618 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:22:17.154305  343618 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:22:17.154516  343618 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:22:17.224443  343618 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-13 10:22:17.215223073 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:22:17.224550  343618 docker.go:319] overlay module found
	I1213 10:22:17.227748  343618 out.go:179] * Using the docker driver based on existing profile
	I1213 10:22:17.230628  343618 start.go:309] selected driver: docker
	I1213 10:22:17.230651  343618 start.go:927] validating driver "docker" against &{Name:functional-319494 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-319494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:22:17.230854  343618 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:22:17.234444  343618 out.go:203] 
	W1213 10:22:17.237539  343618 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 10:22:17.240437  343618 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-319494 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.48s)
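
Dry-run validates the request without touching the running cluster: an undersized --memory fails fast with exit 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while the plain dry run succeeds. Both invocations verbatim:
  out/minikube-linux-arm64 start -p functional-319494 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=containerd   # expect exit 23
  out/minikube-linux-arm64 start -p functional-319494 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=containerd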

TestFunctional/parallel/InternationalLanguage (0.25s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-319494 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-319494 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (249.384784ms)

-- stdout --
	* [functional-319494] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1213 10:22:16.817787  343522 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:22:16.817919  343522 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:22:16.817931  343522 out.go:374] Setting ErrFile to fd 2...
	I1213 10:22:16.817937  343522 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:22:16.819010  343522 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 10:22:16.819410  343522 out.go:368] Setting JSON to false
	I1213 10:22:16.827659  343522 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":11090,"bootTime":1765610247,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 10:22:16.827743  343522 start.go:143] virtualization:  
	I1213 10:22:16.831433  343522 out.go:179] * [functional-319494] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1213 10:22:16.834594  343522 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 10:22:16.834790  343522 notify.go:221] Checking for updates...
	I1213 10:22:16.841066  343522 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:22:16.844005  343522 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:22:16.846880  343522 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 10:22:16.849788  343522 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:22:16.852609  343522 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:22:16.855989  343522 config.go:182] Loaded profile config "functional-319494": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 10:22:16.856607  343522 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:22:16.889488  343522 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:22:16.889623  343522 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:22:16.983606  343522 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-13 10:22:16.967798841 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:22:16.983724  343522 docker.go:319] overlay module found
	I1213 10:22:16.989248  343522 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1213 10:22:16.992081  343522 start.go:309] selected driver: docker
	I1213 10:22:16.992103  343522 start.go:927] validating driver "docker" against &{Name:functional-319494 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-319494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:22:16.992202  343522 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:22:16.995874  343522 out.go:203] 
	W1213 10:22:17.000784  343522 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1213 10:22:17.003868  343522 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.25s)
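
The French output is the same RSRC_INSUFFICIENT_REQ_MEMORY dry-run failure, localized. This log does not show how the harness selects the locale; assuming minikube follows the standard locale environment variables, a reproduction would look like:
  # locale selection via LC_ALL is an assumption, not shown in this log
  LC_ALL=fr_FR.UTF-8 out/minikube-linux-arm64 start -p functional-319494 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=containerd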

TestFunctional/parallel/StatusCmd (1.38s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.38s)

TestFunctional/parallel/ServiceCmdConnect (7.63s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-319494 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-319494 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-694h8" [af8335d1-5132-404e-a435-022646e06eec] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
I1213 10:21:57.178240  308915 retry.go:31] will retry after 2.675804021s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:908f8b06-472e-4b30-93fd-0b3631b6897a ResourceVersion:595 Generation:0 CreationTimestamp:2025-12-13 10:21:53 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0x4001715490 VolumeMode:0x40017154a0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
helpers_test.go:353: "hello-node-connect-7d85dfc575-694h8" [af8335d1-5132-404e-a435-022646e06eec] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003717243s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30937
functional_test.go:1680: http://192.168.49.2:30937: success! body:
Request served by hello-node-connect-7d85dfc575-694h8

HTTP/1.1 GET /

Host: 192.168.49.2:30937
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.63s)
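
The connectivity check is a plain NodePort round trip; the echo-server image replies with the request it received, which is what the "Request served by ..." body above shows. A sketch with curl standing in for the test's HTTP client:
  kubectl --context functional-319494 create deployment hello-node-connect --image kicbase/echo-server
  kubectl --context functional-319494 expose deployment hello-node-connect --type=NodePort --port=8080
  URL=$(out/minikube-linux-arm64 -p functional-319494 service hello-node-connect --url)
  curl -s "$URL"   # echo-server reports the request back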

TestFunctional/parallel/AddonsCmd (0.14s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (25.03s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [d9f9e5f9-3ac6-4b7f-8314-cd87a9a0ea8e] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003759075s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-319494 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-319494 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-319494 get pvc myclaim -o=json
I1213 10:21:54.077390  308915 retry.go:31] will retry after 2.944100015s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:908f8b06-472e-4b30-93fd-0b3631b6897a ResourceVersion:595 Generation:0 CreationTimestamp:2025-12-13 10:21:53 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0x4001bcfbc0 VolumeMode:0x4001bcfbd0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-319494 get pvc myclaim -o=json
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-319494 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-319494 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [4e8267bf-eaf2-466b-b4c6-2fee62542116] Pending
helpers_test.go:353: "sp-pod" [4e8267bf-eaf2-466b-b4c6-2fee62542116] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.00411542s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-319494 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-319494 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-319494 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [261eff33-811a-4471-8bc7-388fd2129cbd] Pending
helpers_test.go:353: "sp-pod" [261eff33-811a-4471-8bc7-388fd2129cbd] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.00402086s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-319494 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.03s)
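Note: the claim the test applies can be read back from the last-applied-configuration annotation logged above. A hand-run equivalent follows; testdata/storage-provisioner/pvc.yaml is paraphrased from that annotation, not quoted.
# Recreate the claim the test applies (spec taken from the annotation logged above).
kubectl --context functional-319494 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
  volumeMode: Filesystem
EOF
# Poll until the minikube-hostpath provisioner binds the claim ("Pending" -> "Bound").
kubectl --context functional-319494 get pvc myclaim -o jsonpath='{.status.phase}'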

TestFunctional/parallel/SSHCmd (0.71s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.71s)

TestFunctional/parallel/CpCmd (2.48s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 ssh -n functional-319494 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 cp functional-319494:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1078418737/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 ssh -n functional-319494 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 ssh -n functional-319494 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.48s)
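Note: the cp invocations above cover host-to-guest, guest-to-host, and copying into a directory that does not yet exist. A condensed hand-run sketch; the /tmp destination path here is illustrative, not the harness temp dir:
# Host -> guest: copy a file onto the node, then read it back over SSH.
out/minikube-linux-arm64 -p functional-319494 cp testdata/cp-test.txt /home/docker/cp-test.txt
out/minikube-linux-arm64 -p functional-319494 ssh -n functional-319494 "sudo cat /home/docker/cp-test.txt"
# Guest -> host: pull the same file back out of the node.
out/minikube-linux-arm64 -p functional-319494 cp functional-319494:/home/docker/cp-test.txt /tmp/cp-test.txt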

TestFunctional/parallel/FileSync (0.36s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/308915/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 ssh "sudo cat /etc/test/nested/copy/308915/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)

TestFunctional/parallel/CertSync (2.33s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/308915.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 ssh "sudo cat /etc/ssl/certs/308915.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/308915.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 ssh "sudo cat /usr/share/ca-certificates/308915.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3089152.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 ssh "sudo cat /etc/ssl/certs/3089152.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3089152.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 ssh "sudo cat /usr/share/ca-certificates/3089152.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.33s)

TestFunctional/parallel/NodeLabels (0.13s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-319494 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.13s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.81s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-319494 ssh "sudo systemctl is-active docker": exit status 1 (363.096397ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-319494 ssh "sudo systemctl is-active crio": exit status 1 (441.676061ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.81s)
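Note: the pass condition is that the non-selected runtimes are inactive. systemctl is-active exits with status 3 for an inactive unit, which minikube ssh surfaces as the exit status 1 seen above. A sketch of the same probe; the containerd line is an extra sanity check, not part of this test:
out/minikube-linux-arm64 -p functional-319494 ssh "sudo systemctl is-active containerd"  # expect: active
out/minikube-linux-arm64 -p functional-319494 ssh "sudo systemctl is-active docker"      # expect: inactive
out/minikube-linux-arm64 -p functional-319494 ssh "sudo systemctl is-active crio"        # expect: inactive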

TestFunctional/parallel/License (0.35s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.35s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-319494 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-319494 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-319494 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 341250: os: process already finished
helpers_test.go:520: unable to terminate pid 341041: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-319494 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-319494 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.48s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-319494 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [1afad265-3989-49af-aff2-f9b409f2c695] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [1afad265-3989-49af-aff2-f9b409f2c695] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003605054s
I1213 10:21:56.556249  308915 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.48s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-319494 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.13.246 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-319494 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
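Note: taken together, the tunnel subtests walk the lifecycle below. A hand-run sketch; the ingress IP 10.97.13.246 was assigned in this run and will differ elsewhere:
# Start a tunnel in the background so LoadBalancer services get a reachable IP.
out/minikube-linux-arm64 -p functional-319494 tunnel --alsologtostderr &
TUNNEL_PID=$!
# Deploy an nginx LoadBalancer service and read back the assigned ingress IP.
kubectl --context functional-319494 apply -f testdata/testsvc.yaml
kubectl --context functional-319494 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
# Hit the service directly through the tunnel, then tear the tunnel down.
curl -s http://10.97.13.246/ >/dev/null && echo "tunnel is working"
kill "$TUNNEL_PID"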

TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-319494 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-319494 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-288d2" [a698fba5-dac9-4f8e-9f35-1b93415b745c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-288d2" [a698fba5-dac9-4f8e-9f35-1b93415b745c] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003369746s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)
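Note: the two kubectl commands above are the entire deploy step; the harness then polls pods carrying the app=hello-node label. A hand-run approximation of that wait (kubectl wait is a stand-in for the helper, not what the harness calls):
kubectl --context functional-319494 create deployment hello-node --image kicbase/echo-server
kubectl --context functional-319494 expose deployment hello-node --type=NodePort --port=8080
# Block until the pod behind the label reports Ready.
kubectl --context functional-319494 wait --for=condition=Ready pod -l app=hello-node --timeout=600s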

TestFunctional/parallel/ServiceCmd/List (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.51s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 service list -o json
functional_test.go:1504: Took "521.656156ms" to run "out/minikube-linux-arm64 -p functional-319494 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31947
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

TestFunctional/parallel/ServiceCmd/Format (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

TestFunctional/parallel/ServiceCmd/URL (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31947
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.56s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.58s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.58s)

TestFunctional/parallel/MountCmd/any-port (9.91s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-319494 /tmp/TestFunctionalparallelMountCmdany-port1400551886/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765621334097665711" to /tmp/TestFunctionalparallelMountCmdany-port1400551886/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765621334097665711" to /tmp/TestFunctionalparallelMountCmdany-port1400551886/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765621334097665711" to /tmp/TestFunctionalparallelMountCmdany-port1400551886/001/test-1765621334097665711
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-319494 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (507.798402ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1213 10:22:14.606482  308915 retry.go:31] will retry after 651.456546ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 13 10:22 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 13 10:22 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 13 10:22 test-1765621334097665711
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 ssh cat /mount-9p/test-1765621334097665711
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-319494 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [dc2b91a9-d0b8-40ed-9b70-ae61652324e3] Pending
helpers_test.go:353: "busybox-mount" [dc2b91a9-d0b8-40ed-9b70-ae61652324e3] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [dc2b91a9-d0b8-40ed-9b70-ae61652324e3] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [dc2b91a9-d0b8-40ed-9b70-ae61652324e3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.006020355s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-319494 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 ssh stat /mount-9p/created-by-pod
2025/12/13 10:22:23 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-319494 /tmp/TestFunctionalparallelMountCmdany-port1400551886/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.91s)
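Note: condensed, the 9p mount flow above is the sequence below. A sketch; /tmp/mount-demo stands in for the per-run temp directory:
# Publish a host directory into the guest over 9p (runs in the foreground; background it).
out/minikube-linux-arm64 mount -p functional-319494 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1 &
MOUNT_PID=$!
# Verify the mount took, then inspect its contents from inside the guest.
out/minikube-linux-arm64 -p functional-319494 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-arm64 -p functional-319494 ssh -- ls -la /mount-9p
# Clean up: force-unmount in the guest and stop the mount process.
out/minikube-linux-arm64 -p functional-319494 ssh "sudo umount -f /mount-9p"
kill "$MOUNT_PID"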

TestFunctional/parallel/ProfileCmd/profile_list (0.54s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "481.571476ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "58.027922ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.54s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "415.514978ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "90.273779ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.43s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-319494 version -o=json --components: (1.424886686s)
--- PASS: TestFunctional/parallel/Version/components (1.43s)

TestFunctional/parallel/MountCmd/specific-port (2.41s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-319494 /tmp/TestFunctionalparallelMountCmdspecific-port293416253/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-319494 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (451.009298ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1213 10:22:24.460846  308915 retry.go:31] will retry after 685.56105ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-319494 /tmp/TestFunctionalparallelMountCmdspecific-port293416253/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-319494 ssh "sudo umount -f /mount-9p": exit status 1 (365.11733ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-319494 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-319494 /tmp/TestFunctionalparallelMountCmdspecific-port293416253/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.41s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-319494 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-319494
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-319494
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-319494 image ls --format short --alsologtostderr:
I1213 10:22:32.655639  346702 out.go:360] Setting OutFile to fd 1 ...
I1213 10:22:32.655817  346702 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:22:32.655846  346702 out.go:374] Setting ErrFile to fd 2...
I1213 10:22:32.655868  346702 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:22:32.656138  346702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
I1213 10:22:32.656833  346702 config.go:182] Loaded profile config "functional-319494": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 10:22:32.656999  346702 config.go:182] Loaded profile config "functional-319494": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 10:22:32.657577  346702 cli_runner.go:164] Run: docker container inspect functional-319494 --format={{.State.Status}}
I1213 10:22:32.685064  346702 ssh_runner.go:195] Run: systemctl --version
I1213 10:22:32.685119  346702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-319494
I1213 10:22:32.707560  346702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33120 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-319494/id_rsa Username:docker}
I1213 10:22:32.817808  346702 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
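Note: the four ImageList subtests differ only in the --format flag; one loop covers them all (same binary and profile as above):
for fmt in short table json yaml; do
  out/minikube-linux-arm64 -p functional-319494 image ls --format "$fmt" --alsologtostderr
done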

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-319494 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                    IMAGE                    │                  TAG                  │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ registry.k8s.io/etcd                        │ 3.6.5-0                               │ sha256:2c5f0d │ 21.1MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.2                               │ sha256:94bff1 │ 22.8MB │
│ registry.k8s.io/pause                       │ 3.1                                   │ sha256:8057e0 │ 262kB  │
│ docker.io/kicbase/echo-server               │ functional-319494                     │ sha256:ce2d2c │ 2.17MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc                          │ sha256:1611cd │ 1.94MB │
│ public.ecr.aws/nginx/nginx                  │ alpine                                │ sha256:10afed │ 23MB   │
│ registry.k8s.io/kube-scheduler              │ v1.34.2                               │ sha256:4f982e │ 15.8MB │
│ registry.k8s.io/pause                       │ 3.3                                   │ sha256:3d1873 │ 249kB  │
│ registry.k8s.io/pause                       │ 3.10.1                                │ sha256:d7b100 │ 268kB  │
│ registry.k8s.io/pause                       │ latest                                │ sha256:8cb209 │ 71.3kB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b                    │ sha256:b1a8c6 │ 40.6MB │
│ docker.io/kindest/kindnetd                  │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ sha256:c96ee3 │ 38.5MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                                    │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1                               │ sha256:138784 │ 20.4MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.2                               │ sha256:b178af │ 24.6MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.2                               │ sha256:1b3491 │ 20.7MB │
│ docker.io/library/minikube-local-cache-test │ functional-319494                     │ sha256:3e30c5 │ 991B   │
└─────────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-319494 image ls --format table --alsologtostderr:
I1213 10:22:33.437073  346937 out.go:360] Setting OutFile to fd 1 ...
I1213 10:22:33.437202  346937 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:22:33.437212  346937 out.go:374] Setting ErrFile to fd 2...
I1213 10:22:33.437217  346937 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:22:33.437474  346937 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
I1213 10:22:33.438115  346937 config.go:182] Loaded profile config "functional-319494": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 10:22:33.438232  346937 config.go:182] Loaded profile config "functional-319494": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 10:22:33.438776  346937 cli_runner.go:164] Run: docker container inspect functional-319494 --format={{.State.Status}}
I1213 10:22:33.457624  346937 ssh_runner.go:195] Run: systemctl --version
I1213 10:22:33.457675  346937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-319494
I1213 10:22:33.485400  346937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33120 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-319494/id_rsa Username:docker}
I1213 10:22:33.598191  346937 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-319494 image ls --format json --alsologtostderr:
[{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"24559643"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:10afed3caf3eed1b711b8fa0a9600a7b488a45653a
15a598a47ac570c1204cc4","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"22985759"},{"id":"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"20392204"},{"id":"sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786","repoDigests":["registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"22802260"},{"id":"sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"15775785"}
,{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-319494"],"size":"2173567"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3
dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"21136588"},{"id":"sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"20718696"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13","repoDigests":["docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29
.0-alpha-105-g20ccfc88"],"size":"38502448"},{"id":"sha256:3e30c52a5eb43a8e5ba840b7293fbdeceebf98349701321a36a877e21e3b575a","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-319494"],"size":"991"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-319494 image ls --format json --alsologtostderr:
I1213 10:22:33.154486  346852 out.go:360] Setting OutFile to fd 1 ...
I1213 10:22:33.155054  346852 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:22:33.155086  346852 out.go:374] Setting ErrFile to fd 2...
I1213 10:22:33.155105  346852 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:22:33.155403  346852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
I1213 10:22:33.156103  346852 config.go:182] Loaded profile config "functional-319494": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 10:22:33.156272  346852 config.go:182] Loaded profile config "functional-319494": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 10:22:33.156837  346852 cli_runner.go:164] Run: docker container inspect functional-319494 --format={{.State.Status}}
I1213 10:22:33.177013  346852 ssh_runner.go:195] Run: systemctl --version
I1213 10:22:33.177067  346852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-319494
I1213 10:22:33.201281  346852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33120 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-319494/id_rsa Username:docker}
I1213 10:22:33.314681  346852 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-319494 image ls --format yaml --alsologtostderr:
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13
repoDigests:
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "38502448"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "21136588"
- id: sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786
repoDigests:
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "22802260"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:3e30c52a5eb43a8e5ba840b7293fbdeceebf98349701321a36a877e21e3b575a
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-319494
size: "991"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "22985759"
- id: sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "20392204"
- id: sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "15775785"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-319494
size: "2173567"
- id: sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "24559643"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "20718696"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-319494 image ls --format yaml --alsologtostderr:
I1213 10:22:32.846439  346767 out.go:360] Setting OutFile to fd 1 ...
I1213 10:22:32.846585  346767 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:22:32.846595  346767 out.go:374] Setting ErrFile to fd 2...
I1213 10:22:32.846599  346767 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:22:32.846865  346767 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
I1213 10:22:32.847480  346767 config.go:182] Loaded profile config "functional-319494": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 10:22:32.847593  346767 config.go:182] Loaded profile config "functional-319494": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 10:22:32.848095  346767 cli_runner.go:164] Run: docker container inspect functional-319494 --format={{.State.Status}}
I1213 10:22:32.880362  346767 ssh_runner.go:195] Run: systemctl --version
I1213 10:22:32.880414  346767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-319494
I1213 10:22:32.905847  346767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33120 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-319494/id_rsa Username:docker}
I1213 10:22:33.014953  346767 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-319494 ssh pgrep buildkitd: exit status 1 (388.044923ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 image build -t localhost/my-image:functional-319494 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-319494 image build -t localhost/my-image:functional-319494 testdata/build --alsologtostderr: (3.153367615s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-319494 image build -t localhost/my-image:functional-319494 testdata/build --alsologtostderr:
I1213 10:22:33.306534  346900 out.go:360] Setting OutFile to fd 1 ...
I1213 10:22:33.307304  346900 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:22:33.307339  346900 out.go:374] Setting ErrFile to fd 2...
I1213 10:22:33.307361  346900 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:22:33.308327  346900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
I1213 10:22:33.309062  346900 config.go:182] Loaded profile config "functional-319494": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 10:22:33.311694  346900 config.go:182] Loaded profile config "functional-319494": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 10:22:33.319760  346900 cli_runner.go:164] Run: docker container inspect functional-319494 --format={{.State.Status}}
I1213 10:22:33.344997  346900 ssh_runner.go:195] Run: systemctl --version
I1213 10:22:33.345051  346900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-319494
I1213 10:22:33.367549  346900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33120 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-319494/id_rsa Username:docker}
I1213 10:22:33.473437  346900 build_images.go:162] Building image from path: /tmp/build.158195150.tar
I1213 10:22:33.473517  346900 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1213 10:22:33.483080  346900 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.158195150.tar
I1213 10:22:33.488322  346900 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.158195150.tar: stat -c "%s %y" /var/lib/minikube/build/build.158195150.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.158195150.tar': No such file or directory
I1213 10:22:33.488356  346900 ssh_runner.go:362] scp /tmp/build.158195150.tar --> /var/lib/minikube/build/build.158195150.tar (3072 bytes)
I1213 10:22:33.514630  346900 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.158195150
I1213 10:22:33.526292  346900 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.158195150 -xf /var/lib/minikube/build/build.158195150.tar
I1213 10:22:33.534544  346900 containerd.go:394] Building image: /var/lib/minikube/build/build.158195150
I1213 10:22:33.534637  346900 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.158195150 --local dockerfile=/var/lib/minikube/build/build.158195150 --output type=image,name=localhost/my-image:functional-319494
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:a8c498e2663720035cfcfa4adb8f935423f9079580111373a706fc49434f7467 0.0s done
#8 exporting config sha256:46baea440b80fce227610820d64e85f95bbe46f358c5b2251ebbfa928c512201 0.0s done
#8 naming to localhost/my-image:functional-319494 done
#8 DONE 0.2s
I1213 10:22:36.383805  346900 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.158195150 --local dockerfile=/var/lib/minikube/build/build.158195150 --output type=image,name=localhost/my-image:functional-319494: (2.849135902s)
I1213 10:22:36.383872  346900 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.158195150
I1213 10:22:36.391768  346900 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.158195150.tar
I1213 10:22:36.399621  346900 build_images.go:218] Built localhost/my-image:functional-319494 from /tmp/build.158195150.tar
I1213 10:22:36.399652  346900 build_images.go:134] succeeded building to: functional-319494
I1213 10:22:36.399658  346900 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.78s)
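
Note: the buildctl step log above (#1, #5, #6, #7) implies that testdata/build holds a three-instruction Dockerfile along the lines of the sketch below. This is a reconstruction from the step names only, not the literal 97B file:

    $ cat testdata/build/Dockerfile    # reconstructed sketch; the real file may differ
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /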

TestFunctional/parallel/ImageCommands/Setup (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-319494
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.65s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 image load --daemon kicbase/echo-server:functional-319494 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-319494 image load --daemon kicbase/echo-server:functional-319494 --alsologtostderr: (1.072061317s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.51s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.74s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-319494 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1319197955/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-319494 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1319197955/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-319494 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1319197955/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-319494 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-319494 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1319197955/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-319494 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1319197955/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-319494 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1319197955/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.74s)
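
Note: findmnt -T <path> reports the filesystem containing the target, so each successful "findmnt -T /mountN" above confirms that the 9p mount is live, and "mount --kill=true" then terminates every mount daemon for the profile at once. A minimal sketch of the same check, assuming a running functional-319494 profile:

    $ out/minikube-linux-arm64 -p functional-319494 ssh "findmnt -T /mount1"   # non-zero exit would mean the mount is gone
    $ out/minikube-linux-arm64 mount -p functional-319494 --kill=true          # cleans up /mount1, /mount2 and /mount3 together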

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 image load --daemon kicbase/echo-server:functional-319494 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-319494 image load --daemon kicbase/echo-server:functional-319494 --alsologtostderr: (1.226337006s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.54s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-319494
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 image load --daemon kicbase/echo-server:functional-319494 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-arm64 -p functional-319494 image load --daemon kicbase/echo-server:functional-319494 --alsologtostderr: (1.109078639s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.79s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 image save kicbase/echo-server:functional-319494 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.46s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 image rm kicbase/echo-server:functional-319494 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.61s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.70s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-319494
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-319494 image save --daemon kicbase/echo-server:functional-319494 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-319494
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.46s)
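
Note: ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon above together exercise a full save/load round trip. Condensed into one hedged sequence (the tar path is shortened for readability):

    $ out/minikube-linux-arm64 -p functional-319494 image save kicbase/echo-server:functional-319494 ./echo-server-save.tar
    $ out/minikube-linux-arm64 -p functional-319494 image rm kicbase/echo-server:functional-319494
    $ out/minikube-linux-arm64 -p functional-319494 image load ./echo-server-save.tar    # restore the image from the tar
    $ out/minikube-linux-arm64 -p functional-319494 image save --daemon kicbase/echo-server:functional-319494
    $ docker image inspect kicbase/echo-server:functional-319494                         # image is back in the host daemon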

TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-319494
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-319494
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-319494
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/test/nested/copy/308915/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-652709 cache add registry.k8s.io/pause:3.1: (1.095985045s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-652709 cache add registry.k8s.io/pause:3.3: (1.129645186s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-652709 cache add registry.k8s.io/pause:latest: (1.035783811s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-652709 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach1088979430/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 cache add minikube-local-cache-test:functional-652709
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 cache delete minikube-local-cache-test:functional-652709
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-652709
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.9s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-652709 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (295.971871ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.90s)
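
Note: the sequence above is what "cache reload" exists for: after an image is removed from the node's runtime, reload pushes the cached images back in. Condensed from the log:

    $ out/minikube-linux-arm64 -p functional-652709 ssh sudo crictl rmi registry.k8s.io/pause:latest
    $ out/minikube-linux-arm64 -p functional-652709 ssh sudo crictl inspecti registry.k8s.io/pause:latest
    FATA[0000] no such image "registry.k8s.io/pause:latest" present
    $ out/minikube-linux-arm64 -p functional-652709 cache reload
    $ out/minikube-linux-arm64 -p functional-652709 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again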

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.92s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.92s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.16s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs2397316113/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-652709 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs2397316113/001/logs.txt: (1.157010495s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.16s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-652709 config get cpus: exit status 14 (69.270778ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-652709 config get cpus: exit status 14 (69.992237ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.46s)
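
Note: exit status 14, asserted twice above, is the code "config get" returns when the key has no stored value. The test's set/unset cycle, condensed (the "2" printed after the set is an assumption, not shown in the log):

    $ out/minikube-linux-arm64 -p functional-652709 config unset cpus
    $ out/minikube-linux-arm64 -p functional-652709 config get cpus      # exit status 14
    Error: specified key could not be found in config
    $ out/minikube-linux-arm64 -p functional-652709 config set cpus 2
    $ out/minikube-linux-arm64 -p functional-652709 config get cpus      # exit 0
    2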

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.45s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-652709 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-652709 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (179.465744ms)

-- stdout --
	* [functional-652709] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1213 10:52:06.257653  376712 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:52:06.257769  376712 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:52:06.257780  376712 out.go:374] Setting ErrFile to fd 2...
	I1213 10:52:06.257786  376712 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:52:06.258054  376712 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 10:52:06.258434  376712 out.go:368] Setting JSON to false
	I1213 10:52:06.259306  376712 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12879,"bootTime":1765610247,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 10:52:06.259374  376712 start.go:143] virtualization:  
	I1213 10:52:06.262591  376712 out.go:179] * [functional-652709] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:52:06.266455  376712 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 10:52:06.266534  376712 notify.go:221] Checking for updates...
	I1213 10:52:06.272708  376712 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:52:06.275800  376712 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:52:06.279318  376712 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 10:52:06.282348  376712 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:52:06.285296  376712 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:52:06.288731  376712 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:52:06.289369  376712 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:52:06.310980  376712 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:52:06.311111  376712 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:52:06.368804  376712 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:52:06.358388813 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:52:06.368913  376712 docker.go:319] overlay module found
	I1213 10:52:06.371937  376712 out.go:179] * Using the docker driver based on existing profile
	I1213 10:52:06.374752  376712 start.go:309] selected driver: docker
	I1213 10:52:06.374768  376712 start.go:927] validating driver "docker" against &{Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:52:06.374884  376712 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:52:06.378420  376712 out.go:203] 
	W1213 10:52:06.381300  376712 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 10:52:06.384117  376712 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-652709 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.45s)
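
Note: even under --dry-run, minikube validates the requested memory against its 1800MB usable minimum before doing anything else, and RSRC_INSUFFICIENT_REQ_MEMORY maps to exit status 23, which is what the test asserts. Minimal repro, as exercised above:

    $ out/minikube-linux-arm64 start -p functional-652709 --dry-run --memory 250MB --driver=docker --container-runtime=containerd
    X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
    $ echo $?
    23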

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-652709 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-652709 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (198.45572ms)

-- stdout --
	* [functional-652709] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1213 10:52:06.065660  376665 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:52:06.066155  376665 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:52:06.066172  376665 out.go:374] Setting ErrFile to fd 2...
	I1213 10:52:06.066178  376665 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:52:06.067095  376665 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 10:52:06.067636  376665 out.go:368] Setting JSON to false
	I1213 10:52:06.068584  376665 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12879,"bootTime":1765610247,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 10:52:06.068732  376665 start.go:143] virtualization:  
	I1213 10:52:06.072154  376665 out.go:179] * [functional-652709] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1213 10:52:06.074371  376665 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 10:52:06.074438  376665 notify.go:221] Checking for updates...
	I1213 10:52:06.079948  376665 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:52:06.082671  376665 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 10:52:06.085538  376665 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 10:52:06.088374  376665 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:52:06.091176  376665 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:52:06.094440  376665 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:52:06.095085  376665 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:52:06.128365  376665 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:52:06.128573  376665 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:52:06.188243  376665 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:52:06.178446584 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:52:06.188369  376665 docker.go:319] overlay module found
	I1213 10:52:06.191511  376665 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1213 10:52:06.194286  376665 start.go:309] selected driver: docker
	I1213 10:52:06.194310  376665 start.go:927] validating driver "docker" against &{Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:52:06.194419  376665 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:52:06.197939  376665 out.go:203] 
	W1213 10:52:06.200768  376665 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1213 10:52:06.203721  376665 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.20s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.75s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.75s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 ssh -n functional-652709 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 cp functional-652709:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp425140614/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 ssh -n functional-652709 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 ssh -n functional-652709 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/308915/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 ssh "sudo cat /etc/test/nested/copy/308915/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.29s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.7s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/308915.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 ssh "sudo cat /etc/ssl/certs/308915.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/308915.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 ssh "sudo cat /usr/share/ca-certificates/308915.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3089152.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 ssh "sudo cat /etc/ssl/certs/3089152.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3089152.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 ssh "sudo cat /usr/share/ca-certificates/3089152.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.70s)
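
Note: the hash-named files checked above (51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash naming for /etc/ssl/certs, so each synced certificate is reachable both by its .pem name and by the hash that TLS lookups use. Assuming the pairing in the test holds, the hash is reproducible from a local copy of the cert (the path here is hypothetical):

    $ openssl x509 -noout -subject_hash -in 308915.pem    # hypothetical local copy of the synced cert
    51391683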

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.57s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-652709 ssh "sudo systemctl is-active docker": exit status 1 (276.215605ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-652709 ssh "sudo systemctl is-active crio": exit status 1 (293.671141ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.57s)
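
Note: the non-zero exits here are the expected outcome: with containerd as the active runtime, "systemctl is-active" reports docker and crio as inactive and exits with status 3 (systemd's code for an inactive unit), which ssh propagates as the failure the test looks for:

    $ out/minikube-linux-arm64 -p functional-652709 ssh "sudo systemctl is-active docker"
    inactive
    $ out/minikube-linux-arm64 -p functional-652709 ssh "sudo systemctl is-active crio"
    inactive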

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.6s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.60s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-652709 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-652709 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
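StartTunnel launched the tunnel as a background daemon and DeleteTunnel tears it down; the "exit status 103" above is logged by the stop helper but does not fail the test. A sketch of the same lifecycle done by hand, assuming the functional-652709 profile:

    out/minikube-linux-arm64 -p functional-652709 tunnel --alsologtostderr &
    TUNNEL_PID=$!
    # ... exercise LoadBalancer services while the tunnel is up ...
    kill "$TUNNEL_PID" && wait "$TUNNEL_PID" 2>/dev/null || true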

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.41s)
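The misspelled "profile lis" on the first line appears to be deliberate: the test then lists the profiles to confirm the typo did not create one. A sketch of the same guard, assuming jq is available and minikube's JSON output shape of {"valid": [{"Name": ...}]}:

    out/minikube-linux-arm64 profile lis || true
    out/minikube-linux-arm64 profile list --output json \
      | jq -e '.valid[] | select(.Name == "lis")' && echo "bug: profile was created" >&2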

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "336.434927ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "55.666312ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "338.921672ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "58.811156ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.40s)
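The timings above show why --light exists: the full listing (~339ms) probes each cluster's status, while --light (~59ms) only reads the profile files. A sketch of post-processing the JSON, assuming jq and the {"valid": [{"Name": ...}]} output shape:

    out/minikube-linux-arm64 profile list -o json --light | jq -r '.valid[].Name'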

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (2.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-652709 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2196735515/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-652709 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (350.590056ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1213 10:51:59.774929  308915 retry.go:31] will retry after 582.430099ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-652709 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2196735515/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-652709 ssh "sudo umount -f /mount-9p": exit status 1 (313.302506ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-652709 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-652709 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2196735515/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (2.11s)
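The first findmnt above raced the 9p mount daemon and needed one retry before succeeding. A sketch of the same flow with a fixed port, assuming a /tmp/data directory exists on the host:

    out/minikube-linux-arm64 mount -p functional-652709 /tmp/data:/mount-9p --port 46464 &
    sleep 2   # give the mount daemon a moment; the retry above shows it can lag
    out/minikube-linux-arm64 -p functional-652709 ssh "findmnt -T /mount-9p | grep 9p"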

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-652709 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1116597745/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-652709 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1116597745/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-652709 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1116597745/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-652709 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-652709 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1116597745/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-652709 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1116597745/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-652709 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1116597745/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.30s)
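Rather than stopping the three background mount helpers one by one, the test uses the --kill flag, which terminates every mount process for the profile in a single call:

    out/minikube-linux-arm64 mount -p functional-652709 --kill=true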

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.50s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-652709 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-652709
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-652709
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-652709 image ls --format short --alsologtostderr:
I1213 10:52:19.351956  378883 out.go:360] Setting OutFile to fd 1 ...
I1213 10:52:19.352099  378883 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:52:19.352117  378883 out.go:374] Setting ErrFile to fd 2...
I1213 10:52:19.352123  378883 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:52:19.352494  378883 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
I1213 10:52:19.353468  378883 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 10:52:19.353613  378883 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 10:52:19.354162  378883 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
I1213 10:52:19.371408  378883 ssh_runner.go:195] Run: systemctl --version
I1213 10:52:19.371465  378883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
I1213 10:52:19.388219  378883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
I1213 10:52:19.493172  378883 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-652709 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-apiserver              │ v1.35.0-beta.0     │ sha256:ccd634 │ 24.7MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ docker.io/kicbase/echo-server               │ functional-652709  │ sha256:ce2d2c │ 2.17MB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/kube-proxy                  │ v1.35.0-beta.0     │ sha256:404c2e │ 22.4MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ docker.io/library/minikube-local-cache-test │ functional-652709  │ sha256:3e30c5 │ 991B   │
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:2c5f0d │ 21.1MB │
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-beta.0     │ sha256:68b5f7 │ 20.7MB │
│ registry.k8s.io/coredns/coredns             │ v1.13.1            │ sha256:e08f4d │ 21.2MB │
│ registry.k8s.io/kube-scheduler              │ v1.35.0-beta.0     │ sha256:163787 │ 15.4MB │
│ localhost/my-image                          │ functional-652709  │ sha256:a36d03 │ 831kB  │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-652709 image ls --format table --alsologtostderr:
I1213 10:52:23.567558  379278 out.go:360] Setting OutFile to fd 1 ...
I1213 10:52:23.567729  379278 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:52:23.567758  379278 out.go:374] Setting ErrFile to fd 2...
I1213 10:52:23.567786  379278 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:52:23.568189  379278 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
I1213 10:52:23.569563  379278 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 10:52:23.569828  379278 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 10:52:23.570428  379278 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
I1213 10:52:23.587645  379278 ssh_runner.go:195] Run: systemctl --version
I1213 10:52:23.587702  379278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
I1213 10:52:23.605845  379278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
I1213 10:52:23.711216  379278 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-652709 image ls --format json --alsologtostderr:
[{"id":"sha256:3e30c52a5eb43a8e5ba840b7293fbdeceebf98349701321a36a877e21e3b575a","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-652709"],"size":"991"},{"id":"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"22429671"},{"id":"sha256:a36d03f7ad105f0365fa04ea21effca6a15cb42d29181b995439b46b977d5500","repoDigests":[],"repoTags":["localhost/my-image:functional-652709"],"size":"830618"},{"id":"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"21168808"},{"id":"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/
etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"21136588"},{"id":"sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"15391364"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4
f33377538efa3663a40079642e144146d0246e58"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"24678359"},{"id":"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"20661043"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:ce2d2cda2d858fdaea84129de
b86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-652709"],"size":"2173567"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-652709 image ls --format json --alsologtostderr:
I1213 10:52:23.341697  379241 out.go:360] Setting OutFile to fd 1 ...
I1213 10:52:23.341810  379241 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:52:23.341821  379241 out.go:374] Setting ErrFile to fd 2...
I1213 10:52:23.341826  379241 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:52:23.342087  379241 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
I1213 10:52:23.342738  379241 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 10:52:23.342866  379241 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 10:52:23.343374  379241 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
I1213 10:52:23.360733  379241 ssh_runner.go:195] Run: systemctl --version
I1213 10:52:23.360790  379241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
I1213 10:52:23.379048  379241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
I1213 10:52:23.481403  379241 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.22s)
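Of the four list formats, JSON is the easiest to post-process. A sketch that extracts every tag known to the guest's containerd, assuming jq is available:

    out/minikube-linux-arm64 -p functional-652709 image ls --format json \
      | jq -r '.[].repoTags[]' | sort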

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-652709 image ls --format yaml --alsologtostderr:
- id: sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "20661043"
- id: sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "15391364"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-652709
size: "2173567"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "21168808"
- id: sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "21136588"
- id: sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "24678359"
- id: sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "22429671"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:3e30c52a5eb43a8e5ba840b7293fbdeceebf98349701321a36a877e21e3b575a
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-652709
size: "991"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-652709 image ls --format yaml --alsologtostderr:
I1213 10:52:19.582880  378920 out.go:360] Setting OutFile to fd 1 ...
I1213 10:52:19.582994  378920 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:52:19.583005  378920 out.go:374] Setting ErrFile to fd 2...
I1213 10:52:19.583011  378920 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:52:19.583265  378920 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
I1213 10:52:19.583864  378920 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 10:52:19.583983  378920 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 10:52:19.584511  378920 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
I1213 10:52:19.605984  378920 ssh_runner.go:195] Run: systemctl --version
I1213 10:52:19.606041  378920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
I1213 10:52:19.623830  378920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
I1213 10:52:19.730418  378920 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.53s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-652709 ssh pgrep buildkitd: exit status 1 (309.621216ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 image build -t localhost/my-image:functional-652709 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-652709 image build -t localhost/my-image:functional-652709 testdata/build --alsologtostderr: (2.978139868s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-652709 image build -t localhost/my-image:functional-652709 testdata/build --alsologtostderr:
I1213 10:52:20.128973  379026 out.go:360] Setting OutFile to fd 1 ...
I1213 10:52:20.129168  379026 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:52:20.129198  379026 out.go:374] Setting ErrFile to fd 2...
I1213 10:52:20.129217  379026 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:52:20.129503  379026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
I1213 10:52:20.130221  379026 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 10:52:20.130981  379026 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 10:52:20.131567  379026 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
I1213 10:52:20.149859  379026 ssh_runner.go:195] Run: systemctl --version
I1213 10:52:20.149924  379026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
I1213 10:52:20.169049  379026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
I1213 10:52:20.273639  379026 build_images.go:162] Building image from path: /tmp/build.276099143.tar
I1213 10:52:20.273718  379026 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1213 10:52:20.282610  379026 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.276099143.tar
I1213 10:52:20.286910  379026 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.276099143.tar: stat -c "%s %y" /var/lib/minikube/build/build.276099143.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.276099143.tar': No such file or directory
I1213 10:52:20.286939  379026 ssh_runner.go:362] scp /tmp/build.276099143.tar --> /var/lib/minikube/build/build.276099143.tar (3072 bytes)
I1213 10:52:20.305451  379026 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.276099143
I1213 10:52:20.313070  379026 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.276099143 -xf /var/lib/minikube/build/build.276099143.tar
I1213 10:52:20.321142  379026 containerd.go:394] Building image: /var/lib/minikube/build/build.276099143
I1213 10:52:20.321213  379026 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.276099143 --local dockerfile=/var/lib/minikube/build/build.276099143 --output type=image,name=localhost/my-image:functional-652709
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:0302b810f7b02fb235fc59378ab5d5cdf34bad31e2f12fa4ddf2795a85e1cddb 0.0s done
#8 exporting config sha256:a36d03f7ad105f0365fa04ea21effca6a15cb42d29181b995439b46b977d5500 0.0s done
#8 naming to localhost/my-image:functional-652709 done
#8 DONE 0.2s
I1213 10:52:23.029021  379026 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.276099143 --local dockerfile=/var/lib/minikube/build/build.276099143 --output type=image,name=localhost/my-image:functional-652709: (2.707775229s)
I1213 10:52:23.029094  379026 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.276099143
I1213 10:52:23.037559  379026 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.276099143.tar
I1213 10:52:23.045584  379026 build_images.go:218] Built localhost/my-image:functional-652709 from /tmp/build.276099143.tar
I1213 10:52:23.045619  379026 build_images.go:134] succeeded building to: functional-652709
I1213 10:52:23.045625  379026 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.53s)
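The stderr above shows what "image build" does under the hood on a containerd runtime: it tars the build context, copies the tarball into /var/lib/minikube/build inside the node, unpacks it, and drives buildctl with the dockerfile.v0 frontend. The equivalent one-liner, assuming a testdata/build directory containing a Dockerfile:

    out/minikube-linux-arm64 -p functional-652709 image build \
      -t localhost/my-image:functional-652709 testdata/build
    out/minikube-linux-arm64 -p functional-652709 image ls | grep my-image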

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-652709
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 image load --daemon kicbase/echo-server:functional-652709 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.09s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 image load --daemon kicbase/echo-server:functional-652709 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.09s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.31s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-652709
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 image load --daemon kicbase/echo-server:functional-652709 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.31s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.33s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 image save kicbase/echo-server:functional-652709 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.33s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 image rm kicbase/echo-server:functional-652709 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.50s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.71s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.71s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-652709
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 image save --daemon kicbase/echo-server:functional-652709 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-652709
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.41s)
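Taken together, the last four tests cover the full save/load round trip. A condensed sketch, assuming /tmp is writable on the host:

    out/minikube-linux-arm64 -p functional-652709 image save \
      kicbase/echo-server:functional-652709 /tmp/echo-server.tar    # guest -> tarball
    out/minikube-linux-arm64 -p functional-652709 image rm \
      kicbase/echo-server:functional-652709                         # remove from the guest
    out/minikube-linux-arm64 -p functional-652709 image load /tmp/echo-server.tar
    out/minikube-linux-arm64 -p functional-652709 image save --daemon \
      kicbase/echo-server:functional-652709                         # guest -> host docker daemon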

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-652709 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.14s)
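All three variants exercise "update-context", which rewrites the profile's kubeconfig entry. A sketch of verifying the result, assuming kubectl is on the PATH:

    out/minikube-linux-arm64 -p functional-652709 update-context
    kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-652709")].cluster.server}'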

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-652709
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-652709
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-652709
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (154.59s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1213 10:54:47.368518  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:54:47.374837  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:54:47.386152  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:54:47.407480  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:54:47.448818  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:54:47.530196  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:54:47.691520  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:54:48.013179  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:54:48.654987  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:54:49.936274  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:54:52.498014  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:54:57.619917  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:55:07.861515  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:55:12.241136  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:55:28.343349  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:56:09.305077  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-063724 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m33.698702181s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (154.59s)

TestMultiControlPlane/serial/DeployApp (7.2s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-063724 kubectl -- rollout status deployment/busybox: (4.288818541s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 kubectl -- exec busybox-7b57f96db7-d4rjs -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 kubectl -- exec busybox-7b57f96db7-rjftd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 kubectl -- exec busybox-7b57f96db7-zv2wk -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 kubectl -- exec busybox-7b57f96db7-d4rjs -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 kubectl -- exec busybox-7b57f96db7-rjftd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 kubectl -- exec busybox-7b57f96db7-zv2wk -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 kubectl -- exec busybox-7b57f96db7-d4rjs -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 kubectl -- exec busybox-7b57f96db7-rjftd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 kubectl -- exec busybox-7b57f96db7-zv2wk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.20s)
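The deployment comes from testdata/ha/ha-pod-dns-test.yaml; once the rollout completes, every busybox replica must resolve both external and in-cluster names. A sketch of the per-pod check, assuming the pods carry an app=busybox label (the label name is a guess, and pod names vary per run):

    kubectl --context ha-063724 rollout status deployment/busybox
    POD=$(kubectl --context ha-063724 get pods -l app=busybox -o jsonpath='{.items[0].metadata.name}')
    kubectl --context ha-063724 exec "$POD" -- nslookup kubernetes.default.svc.cluster.local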

TestMultiControlPlane/serial/PingHostFromPods (1.65s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 kubectl -- exec busybox-7b57f96db7-d4rjs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 kubectl -- exec busybox-7b57f96db7-d4rjs -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 kubectl -- exec busybox-7b57f96db7-rjftd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 kubectl -- exec busybox-7b57f96db7-rjftd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 kubectl -- exec busybox-7b57f96db7-zv2wk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 kubectl -- exec busybox-7b57f96db7-zv2wk -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.65s)
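The pipeline above (nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3) assumes busybox nslookup prints the resolved address on output line 5; awk selects that line, cut takes the third space-separated field (the IP), and the follow-up command pings it once. A rough Go equivalent of the parsing step; the sample output format is an assumption, since real nslookup layout varies by version:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // hostIP mimics `awk 'NR==5' | cut -d' ' -f3` over nslookup output.
    func hostIP(nslookupOutput string) (string, bool) {
    	lines := strings.Split(nslookupOutput, "\n")
    	if len(lines) < 5 {
    		return "", false
    	}
    	fields := strings.Split(lines[4], " ") // awk NR==5 -> index 4
    	if len(fields) < 3 {
    		return "", false
    	}
    	return fields[2], true // cut -f3 -> index 2
    }

    func main() {
    	// Assumed busybox-style output; line 5 carries the answer address.
    	sample := "Server:    10.96.0.10\nAddress 1: 10.96.0.10\n\nName:      host.minikube.internal\nAddress 1: 192.168.49.1"
    	if ip, ok := hostIP(sample); ok {
    		fmt.Println("host IP:", ip) // then: ping -c 1 <ip>
    	}
    }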

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (31.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 node add --alsologtostderr -v 5
E1213 10:56:48.080323  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-063724 node add --alsologtostderr -v 5: (30.446066343s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-063724 status --alsologtostderr -v 5: (1.108334883s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (31.55s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-063724 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.128003729s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.13s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (20.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-063724 status --output json --alsologtostderr -v 5: (1.127063337s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 cp testdata/cp-test.txt ha-063724:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 ssh -n ha-063724 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 cp ha-063724:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3169884350/001/cp-test_ha-063724.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 ssh -n ha-063724 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 cp ha-063724:/home/docker/cp-test.txt ha-063724-m02:/home/docker/cp-test_ha-063724_ha-063724-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 ssh -n ha-063724 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 ssh -n ha-063724-m02 "sudo cat /home/docker/cp-test_ha-063724_ha-063724-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 cp ha-063724:/home/docker/cp-test.txt ha-063724-m03:/home/docker/cp-test_ha-063724_ha-063724-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 ssh -n ha-063724 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 ssh -n ha-063724-m03 "sudo cat /home/docker/cp-test_ha-063724_ha-063724-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 cp ha-063724:/home/docker/cp-test.txt ha-063724-m04:/home/docker/cp-test_ha-063724_ha-063724-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 ssh -n ha-063724 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 ssh -n ha-063724-m04 "sudo cat /home/docker/cp-test_ha-063724_ha-063724-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 cp testdata/cp-test.txt ha-063724-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 ssh -n ha-063724-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 cp ha-063724-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3169884350/001/cp-test_ha-063724-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 ssh -n ha-063724-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 cp ha-063724-m02:/home/docker/cp-test.txt ha-063724:/home/docker/cp-test_ha-063724-m02_ha-063724.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 ssh -n ha-063724-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 ssh -n ha-063724 "sudo cat /home/docker/cp-test_ha-063724-m02_ha-063724.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 cp ha-063724-m02:/home/docker/cp-test.txt ha-063724-m03:/home/docker/cp-test_ha-063724-m02_ha-063724-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 ssh -n ha-063724-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 ssh -n ha-063724-m03 "sudo cat /home/docker/cp-test_ha-063724-m02_ha-063724-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 cp ha-063724-m02:/home/docker/cp-test.txt ha-063724-m04:/home/docker/cp-test_ha-063724-m02_ha-063724-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 ssh -n ha-063724-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 ssh -n ha-063724-m04 "sudo cat /home/docker/cp-test_ha-063724-m02_ha-063724-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 cp testdata/cp-test.txt ha-063724-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 ssh -n ha-063724-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 cp ha-063724-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3169884350/001/cp-test_ha-063724-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 ssh -n ha-063724-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 cp ha-063724-m03:/home/docker/cp-test.txt ha-063724:/home/docker/cp-test_ha-063724-m03_ha-063724.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 ssh -n ha-063724-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 ssh -n ha-063724 "sudo cat /home/docker/cp-test_ha-063724-m03_ha-063724.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 cp ha-063724-m03:/home/docker/cp-test.txt ha-063724-m02:/home/docker/cp-test_ha-063724-m03_ha-063724-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 ssh -n ha-063724-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 ssh -n ha-063724-m02 "sudo cat /home/docker/cp-test_ha-063724-m03_ha-063724-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 cp ha-063724-m03:/home/docker/cp-test.txt ha-063724-m04:/home/docker/cp-test_ha-063724-m03_ha-063724-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 ssh -n ha-063724-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 ssh -n ha-063724-m04 "sudo cat /home/docker/cp-test_ha-063724-m03_ha-063724-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 cp testdata/cp-test.txt ha-063724-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 ssh -n ha-063724-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 cp ha-063724-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3169884350/001/cp-test_ha-063724-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 ssh -n ha-063724-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 cp ha-063724-m04:/home/docker/cp-test.txt ha-063724:/home/docker/cp-test_ha-063724-m04_ha-063724.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 ssh -n ha-063724-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 ssh -n ha-063724 "sudo cat /home/docker/cp-test_ha-063724-m04_ha-063724.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 cp ha-063724-m04:/home/docker/cp-test.txt ha-063724-m02:/home/docker/cp-test_ha-063724-m04_ha-063724-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 ssh -n ha-063724-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 ssh -n ha-063724-m02 "sudo cat /home/docker/cp-test_ha-063724-m04_ha-063724-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 cp ha-063724-m04:/home/docker/cp-test.txt ha-063724-m03:/home/docker/cp-test_ha-063724-m04_ha-063724-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 ssh -n ha-063724-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 ssh -n ha-063724-m03 "sudo cat /home/docker/cp-test_ha-063724-m04_ha-063724-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.36s)
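The copy matrix above pushes testdata to every node, then copies each node's file back to the host and across to every other node, verifying every transfer with sudo cat over minikube ssh. A condensed Go sketch of one source node's fan-out, with profile and node names taken from this run and error handling kept minimal:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func run(args ...string) error {
    	return exec.Command("out/minikube-linux-arm64", args...).Run()
    }

    func main() {
    	profile, src := "ha-063724", "ha-063724-m02"
    	for _, dst := range []string{"ha-063724", "ha-063724-m03", "ha-063724-m04"} {
    		remote := fmt.Sprintf("%s:/home/docker/cp-test_%s_%s.txt", dst, src, dst)
    		// minikube -p <profile> cp <src-node>:<path> <dst-node>:<path>
    		if err := run("-p", profile, "cp", src+":/home/docker/cp-test.txt", remote); err != nil {
    			fmt.Println("cp failed:", err)
    			continue
    		}
    		// verify the copy landed on the destination node
    		if err := run("-p", profile, "ssh", "-n", dst,
    			fmt.Sprintf("sudo cat /home/docker/cp-test_%s_%s.txt", src, dst)); err != nil {
    			fmt.Println("verify failed:", err)
    		}
    	}
    }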

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (13.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 node stop m02 --alsologtostderr -v 5
E1213 10:57:31.226632  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-063724 node stop m02 --alsologtostderr -v 5: (12.247091748s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-063724 status --alsologtostderr -v 5: exit status 7 (815.350351ms)

                                                
                                                
-- stdout --
	ha-063724
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-063724-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-063724-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-063724-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 10:57:40.689045  396635 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:57:40.689163  396635 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:57:40.689174  396635 out.go:374] Setting ErrFile to fd 2...
	I1213 10:57:40.689178  396635 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:57:40.689418  396635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 10:57:40.689602  396635 out.go:368] Setting JSON to false
	I1213 10:57:40.689684  396635 mustload.go:66] Loading cluster: ha-063724
	I1213 10:57:40.689773  396635 notify.go:221] Checking for updates...
	I1213 10:57:40.690128  396635 config.go:182] Loaded profile config "ha-063724": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 10:57:40.690152  396635 status.go:174] checking status of ha-063724 ...
	I1213 10:57:40.690650  396635 cli_runner.go:164] Run: docker container inspect ha-063724 --format={{.State.Status}}
	I1213 10:57:40.711547  396635 status.go:371] ha-063724 host status = "Running" (err=<nil>)
	I1213 10:57:40.711572  396635 host.go:66] Checking if "ha-063724" exists ...
	I1213 10:57:40.711881  396635 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-063724
	I1213 10:57:40.744557  396635 host.go:66] Checking if "ha-063724" exists ...
	I1213 10:57:40.744867  396635 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:57:40.744914  396635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-063724
	I1213 10:57:40.771303  396635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33130 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/ha-063724/id_rsa Username:docker}
	I1213 10:57:40.876350  396635 ssh_runner.go:195] Run: systemctl --version
	I1213 10:57:40.884321  396635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:57:40.896966  396635 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:57:40.966516  396635 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-13 10:57:40.955831691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:57:40.967165  396635 kubeconfig.go:125] found "ha-063724" server: "https://192.168.49.254:8443"
	I1213 10:57:40.967193  396635 api_server.go:166] Checking apiserver status ...
	I1213 10:57:40.967235  396635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:57:40.981161  396635 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1469/cgroup
	I1213 10:57:40.989521  396635 api_server.go:182] apiserver freezer: "12:freezer:/docker/cd45e3488819617f02d7d56f9bd023a93c7f86d44c2a9d849f1280c13f58b1ca/kubepods/burstable/pod92613fc37e2de8b2f56ee2b98763878c/69e542f5c76c96da09f659d5025b4cd6412e837a6b8c90dc7a803bf9ad8191c1"
	I1213 10:57:40.989592  396635 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/cd45e3488819617f02d7d56f9bd023a93c7f86d44c2a9d849f1280c13f58b1ca/kubepods/burstable/pod92613fc37e2de8b2f56ee2b98763878c/69e542f5c76c96da09f659d5025b4cd6412e837a6b8c90dc7a803bf9ad8191c1/freezer.state
	I1213 10:57:40.996949  396635 api_server.go:204] freezer state: "THAWED"
	I1213 10:57:40.996977  396635 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1213 10:57:41.005664  396635 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1213 10:57:41.005697  396635 status.go:463] ha-063724 apiserver status = Running (err=<nil>)
	I1213 10:57:41.005710  396635 status.go:176] ha-063724 status: &{Name:ha-063724 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 10:57:41.005728  396635 status.go:174] checking status of ha-063724-m02 ...
	I1213 10:57:41.006096  396635 cli_runner.go:164] Run: docker container inspect ha-063724-m02 --format={{.State.Status}}
	I1213 10:57:41.024556  396635 status.go:371] ha-063724-m02 host status = "Stopped" (err=<nil>)
	I1213 10:57:41.024576  396635 status.go:384] host is not running, skipping remaining checks
	I1213 10:57:41.024582  396635 status.go:176] ha-063724-m02 status: &{Name:ha-063724-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 10:57:41.024602  396635 status.go:174] checking status of ha-063724-m03 ...
	I1213 10:57:41.024936  396635 cli_runner.go:164] Run: docker container inspect ha-063724-m03 --format={{.State.Status}}
	I1213 10:57:41.044612  396635 status.go:371] ha-063724-m03 host status = "Running" (err=<nil>)
	I1213 10:57:41.044960  396635 host.go:66] Checking if "ha-063724-m03" exists ...
	I1213 10:57:41.046513  396635 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-063724-m03
	I1213 10:57:41.064901  396635 host.go:66] Checking if "ha-063724-m03" exists ...
	I1213 10:57:41.065221  396635 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:57:41.065259  396635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-063724-m03
	I1213 10:57:41.083147  396635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/ha-063724-m03/id_rsa Username:docker}
	I1213 10:57:41.188085  396635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:57:41.201555  396635 kubeconfig.go:125] found "ha-063724" server: "https://192.168.49.254:8443"
	I1213 10:57:41.201590  396635 api_server.go:166] Checking apiserver status ...
	I1213 10:57:41.201639  396635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:57:41.218795  396635 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1369/cgroup
	I1213 10:57:41.235163  396635 api_server.go:182] apiserver freezer: "12:freezer:/docker/228ad72e972fb51b7f4cd28fedfdc261d401737737cc8e055de3274dc0ad7209/kubepods/burstable/pod0f945781b690803f3df88d97264a81a0/baa1bd26e524e523ab6a646d24c97ac6c2248dd68378f2f98f8d5eb9030d0714"
	I1213 10:57:41.235242  396635 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/228ad72e972fb51b7f4cd28fedfdc261d401737737cc8e055de3274dc0ad7209/kubepods/burstable/pod0f945781b690803f3df88d97264a81a0/baa1bd26e524e523ab6a646d24c97ac6c2248dd68378f2f98f8d5eb9030d0714/freezer.state
	I1213 10:57:41.244533  396635 api_server.go:204] freezer state: "THAWED"
	I1213 10:57:41.244563  396635 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1213 10:57:41.252931  396635 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1213 10:57:41.252962  396635 status.go:463] ha-063724-m03 apiserver status = Running (err=<nil>)
	I1213 10:57:41.252972  396635 status.go:176] ha-063724-m03 status: &{Name:ha-063724-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 10:57:41.252989  396635 status.go:174] checking status of ha-063724-m04 ...
	I1213 10:57:41.253321  396635 cli_runner.go:164] Run: docker container inspect ha-063724-m04 --format={{.State.Status}}
	I1213 10:57:41.276568  396635 status.go:371] ha-063724-m04 host status = "Running" (err=<nil>)
	I1213 10:57:41.276596  396635 host.go:66] Checking if "ha-063724-m04" exists ...
	I1213 10:57:41.276914  396635 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-063724-m04
	I1213 10:57:41.295944  396635 host.go:66] Checking if "ha-063724-m04" exists ...
	I1213 10:57:41.296254  396635 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:57:41.296303  396635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-063724-m04
	I1213 10:57:41.315859  396635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/ha-063724-m04/id_rsa Username:docker}
	I1213 10:57:41.425231  396635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:57:41.443244  396635 status.go:176] ha-063724-m04 status: &{Name:ha-063724-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.06s)
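The non-zero exit (status 7) from the status command is expected here: minikube status exits non-zero whenever any node in the profile is not running, which is exactly what the test asserts after stopping m02. The stderr trace also shows how "apiserver: Running" is decided for the surviving control planes: locate the kube-apiserver process, confirm its freezer cgroup reports THAWED (not paused), then GET /healthz. A minimal sketch of that last step, with an illustrative endpoint and TLS verification skipped as a prober would for a self-signed cluster cert:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.49.254:8443/healthz") // illustrative endpoint
    	if err != nil {
    		fmt.Println("apiserver unreachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("healthz:", resp.StatusCode) // 200 -> Running
    }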

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.87s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (13.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-063724 node start m02 --alsologtostderr -v 5: (11.40321543s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-063724 status --alsologtostderr -v 5: (1.614035015s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (13.13s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.374760114s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.38s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (102.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-063724 stop --alsologtostderr -v 5: (37.776349077s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 start --wait true --alsologtostderr -v 5
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-063724 start --wait true --alsologtostderr -v 5: (1m4.426246502s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (102.39s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (11.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 node delete m03 --alsologtostderr -v 5
E1213 10:59:47.364889  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-063724 node delete m03 --alsologtostderr -v 5: (10.54376797s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.53s)
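The go-template in the final kubectl call walks every node's status.conditions and prints the status of the "Ready" condition, one per line, so the test can assert that all remaining nodes report True. A self-contained sketch of the same template logic using Go's text/template; note that kubectl evaluates the template against JSON maps (hence lowercase .status.conditions in the command), while this sketch uses exported struct fields:

    package main

    import (
    	"os"
    	"text/template"
    )

    type condition struct{ Type, Status string }
    type node struct {
    	Status struct{ Conditions []condition }
    }

    func main() {
    	tmpl := template.Must(template.New("ready").Parse(
    		`{{range .}}{{range .Status.Conditions}}{{if eq .Type "Ready"}} {{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`))
    	var n1, n2 node
    	n1.Status.Conditions = []condition{{Type: "Ready", Status: "True"}}
    	n2.Status.Conditions = []condition{{Type: "MemoryPressure", Status: "False"}, {Type: "Ready", Status: "True"}}
    	_ = tmpl.Execute(os.Stdout, []node{n1, n2}) // prints " True" twice
    }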

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E1213 10:59:51.158081  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.84s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (36.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 stop --alsologtostderr -v 5
E1213 11:00:12.241083  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:00:15.068385  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-063724 stop --alsologtostderr -v 5: (36.193264012s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-063724 status --alsologtostderr -v 5: exit status 7 (120.089183ms)

                                                
                                                
-- stdout --
	ha-063724
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-063724-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-063724-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 11:00:27.828376  411556 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:00:27.828510  411556 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:00:27.828521  411556 out.go:374] Setting ErrFile to fd 2...
	I1213 11:00:27.828527  411556 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:00:27.828770  411556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 11:00:27.828961  411556 out.go:368] Setting JSON to false
	I1213 11:00:27.829006  411556 mustload.go:66] Loading cluster: ha-063724
	I1213 11:00:27.829083  411556 notify.go:221] Checking for updates...
	I1213 11:00:27.830454  411556 config.go:182] Loaded profile config "ha-063724": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 11:00:27.830490  411556 status.go:174] checking status of ha-063724 ...
	I1213 11:00:27.831036  411556 cli_runner.go:164] Run: docker container inspect ha-063724 --format={{.State.Status}}
	I1213 11:00:27.848678  411556 status.go:371] ha-063724 host status = "Stopped" (err=<nil>)
	I1213 11:00:27.848701  411556 status.go:384] host is not running, skipping remaining checks
	I1213 11:00:27.848709  411556 status.go:176] ha-063724 status: &{Name:ha-063724 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 11:00:27.848743  411556 status.go:174] checking status of ha-063724-m02 ...
	I1213 11:00:27.849063  411556 cli_runner.go:164] Run: docker container inspect ha-063724-m02 --format={{.State.Status}}
	I1213 11:00:27.872893  411556 status.go:371] ha-063724-m02 host status = "Stopped" (err=<nil>)
	I1213 11:00:27.872918  411556 status.go:384] host is not running, skipping remaining checks
	I1213 11:00:27.872925  411556 status.go:176] ha-063724-m02 status: &{Name:ha-063724-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 11:00:27.872945  411556 status.go:174] checking status of ha-063724-m04 ...
	I1213 11:00:27.873239  411556 cli_runner.go:164] Run: docker container inspect ha-063724-m04 --format={{.State.Status}}
	I1213 11:00:27.892489  411556 status.go:371] ha-063724-m04 host status = "Stopped" (err=<nil>)
	I1213 11:00:27.892509  411556 status.go:384] host is not running, skipping remaining checks
	I1213 11:00:27.892525  411556 status.go:176] ha-063724-m04 status: &{Name:ha-063724-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.31s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (68.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-063724 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (1m7.312578012s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (68.35s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.82s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (53.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 node add --control-plane --alsologtostderr -v 5
E1213 11:01:48.080734  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-063724 node add --control-plane --alsologtostderr -v 5: (52.741781208s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-063724 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-063724 status --alsologtostderr -v 5: (1.213079464s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (53.96s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.100726263s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.10s)

                                                
                                    
x
+
TestJSONOutput/start/Command (55.59s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-779251 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-779251 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (55.589356766s)
--- PASS: TestJSONOutput/start/Command (55.59s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.73s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-779251 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-779251 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.98s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-779251 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-779251 --output=json --user=testUser: (5.981263573s)
--- PASS: TestJSONOutput/stop/Command (5.98s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-081883 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-081883 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (94.888192ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"18b2d4b8-b6d1-4c89-9f15-d15fb787e404","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-081883] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5d89368a-2386-4e57-ac01-10dce451c122","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22127"}}
	{"specversion":"1.0","id":"fa929a10-1930-4d54-a420-ae40a68c667c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1fae22cc-bc81-4f26-b310-5ec2fa01d15e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig"}}
	{"specversion":"1.0","id":"17434a8d-ccea-4ab4-998d-511cfe35f8a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube"}}
	{"specversion":"1.0","id":"ff5af229-c10f-4c78-bcf1-2fe63f8b349c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"66a6c6df-2467-44f7-9191-a3c343a759d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c14d1dbe-4743-4746-8922-581a75158815","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-081883" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-081883
--- PASS: TestErrorJSONOutput (0.24s)
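With --output=json, minikube emits one CloudEvents-style JSON object per line (specversion 1.0, a type such as io.k8s.sigs.minikube.step, .info, or .error, and a string-valued data payload), which is the stream the JSON-output tests parse. A minimal sketch of consuming that stream and surfacing error events like the DRV_UNSUPPORTED_OS one above:

    package main

    import (
    	"bufio"
    	"encoding/json"
    	"fmt"
    	"os"
    )

    type event struct {
    	Type string            `json:"type"`
    	Data map[string]string `json:"data"`
    }

    func main() {
    	sc := bufio.NewScanner(os.Stdin) // e.g. minikube start --output=json | thisprogram
    	for sc.Scan() {
    		var e event
    		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
    			continue // skip any non-JSON lines
    		}
    		if e.Type == "io.k8s.sigs.minikube.error" {
    			fmt.Printf("error %s: %s\n", e.Data["exitcode"], e.Data["message"])
    		}
    	}
    }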

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (39.76s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-269584 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-269584 --network=: (37.532813289s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-269584" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-269584
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-269584: (2.203266598s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.76s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (35.22s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-413366 --network=bridge
E1213 11:04:47.366837  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-413366 --network=bridge: (32.965259836s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-413366" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-413366
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-413366: (2.230631792s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.22s)

                                                
                                    
x
+
TestKicExistingNetwork (35.7s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1213 11:05:04.619905  308915 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1213 11:05:04.638453  308915 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1213 11:05:04.638528  308915 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1213 11:05:04.638546  308915 cli_runner.go:164] Run: docker network inspect existing-network
W1213 11:05:04.655089  308915 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1213 11:05:04.655120  308915 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1213 11:05:04.655133  308915 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1213 11:05:04.655241  308915 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1213 11:05:04.673719  308915 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-381e4ce3c9ab IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:2d:23:57:0e:cc} reservation:<nil>}
I1213 11:05:04.674050  308915 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4004e99160}
I1213 11:05:04.674071  308915 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1213 11:05:04.674123  308915 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1213 11:05:04.745981  308915 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-005908 --network=existing-network
E1213 11:05:12.242059  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-005908 --network=existing-network: (33.311963315s)
helpers_test.go:176: Cleaning up "existing-network-005908" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-005908
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-005908: (2.227541578s)
I1213 11:05:40.302492  308915 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (35.70s)
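The trace above shows the free-subnet scan (192.168.49.0/24 is skipped because an existing bridge already claims it, so 192.168.58.0/24 is chosen) followed by the actual bridge creation that the test then reuses via --network=existing-network. A simplified Go wrapper around the same docker network create step, omitting the extra -o flags from the log; the network name and subnet are taken from this run:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("docker", "network", "create",
    		"--driver=bridge",
    		"--subnet=192.168.58.0/24",
    		"--gateway=192.168.58.1",
    		"-o", "com.docker.network.driver.mtu=1500",
    		"--label=created_by.minikube.sigs.k8s.io=true",
    		"existing-network").CombinedOutput()
    	if err != nil {
    		fmt.Printf("create failed: %v\n%s", err, out)
    		return
    	}
    	fmt.Printf("network id: %s", out) // docker prints the new network ID
    }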

                                                
                                    
x
+
TestKicCustomSubnet (38.59s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-141057 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-141057 --subnet=192.168.60.0/24: (36.293149512s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-141057 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-141057" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-141057
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-141057: (2.279163836s)
--- PASS: TestKicCustomSubnet (38.59s)
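
Note: TestKicCustomSubnet verifies the subnet by formatting docker network inspect output (kic_custom_network_test.go:161 above). A minimal Go sketch of that read-back follows; inspectSubnet is an illustrative name, and the code assumes the network has a single IPAM config entry, as the networks created in these tests do.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// inspectSubnet returns the first IPAM subnet of a Docker network, using
// the same --format expression as the test above.
func inspectSubnet(network string) (string, error) {
	out, err := exec.Command("docker", "network", "inspect", network,
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	subnet, err := inspectSubnet("custom-subnet-141057")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(subnet) // 192.168.60.0/24 for the run above
}
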
TestKicStaticIP (35.66s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-194095 --static-ip=192.168.200.200
E1213 11:06:48.080768  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-194095 --static-ip=192.168.200.200: (33.260346526s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-194095 ip
helpers_test.go:176: Cleaning up "static-ip-194095" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-194095
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-194095: (2.243858829s)
--- PASS: TestKicStaticIP (35.66s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (71.96s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-633189 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-633189 --driver=docker  --container-runtime=containerd: (30.824477233s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-635700 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-635700 --driver=docker  --container-runtime=containerd: (35.024818798s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-633189
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-635700
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-635700" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-635700
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-635700: (2.099387228s)
helpers_test.go:176: Cleaning up "first-633189" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-633189
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-633189: (2.427880903s)
--- PASS: TestMinikubeProfile (71.96s)

TestMountStart/serial/StartWithMountFirst (8.45s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-097281 --memory=3072 --mount-string /tmp/TestMountStartserial4182814185/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-097281 --memory=3072 --mount-string /tmp/TestMountStartserial4182814185/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.449729081s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.45s)

TestMountStart/serial/VerifyMountFirst (0.3s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-097281 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)

TestMountStart/serial/StartWithMountSecond (8.63s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-099004 --memory=3072 --mount-string /tmp/TestMountStartserial4182814185/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-099004 --memory=3072 --mount-string /tmp/TestMountStartserial4182814185/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.627014967s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.63s)

TestMountStart/serial/VerifyMountSecond (0.29s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-099004 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

TestMountStart/serial/DeleteFirst (1.72s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-097281 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-097281 --alsologtostderr -v=5: (1.717616256s)
--- PASS: TestMountStart/serial/DeleteFirst (1.72s)

TestMountStart/serial/VerifyMountPostDelete (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-099004 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-099004
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-099004: (1.288345953s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (7.71s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-099004
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-099004: (6.713427573s)
--- PASS: TestMountStart/serial/RestartStopped (7.71s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-099004 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (81.33s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-378027 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1213 11:09:47.365040  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:09:55.319178  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-378027 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m20.805741169s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (81.33s)

TestMultiNode/serial/DeployApp2Nodes (6.55s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-378027 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-378027 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-378027 -- rollout status deployment/busybox: (4.664777201s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-378027 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-378027 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-378027 -- exec busybox-7b57f96db7-8dfkh -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-378027 -- exec busybox-7b57f96db7-q6865 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-378027 -- exec busybox-7b57f96db7-8dfkh -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-378027 -- exec busybox-7b57f96db7-q6865 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-378027 -- exec busybox-7b57f96db7-8dfkh -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-378027 -- exec busybox-7b57f96db7-q6865 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.55s)

TestMultiNode/serial/PingHostFrom2Pods (1.01s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-378027 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-378027 -- exec busybox-7b57f96db7-8dfkh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-378027 -- exec busybox-7b57f96db7-8dfkh -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-378027 -- exec busybox-7b57f96db7-q6865 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-378027 -- exec busybox-7b57f96db7-q6865 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.01s)
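
Note: the ping test first resolves host.minikube.internal from inside each busybox pod. busybox's nslookup prints the resolved address on its fifth output line, awk 'NR==5' selects that line, and cut -d' ' -f3 keeps the address field, which is then pinged. A sketch of the same extraction driven from Go; hostIPFromPod is a hypothetical helper, and the context and pod names are taken from this run.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostIPFromPod runs the busybox pipeline from the test above inside a pod
// and returns the address it resolves for host.minikube.internal.
func hostIPFromPod(kubeContext, pod string) (string, error) {
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"exec", pod, "--", "sh", "-c",
		"nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	ip, err := hostIPFromPod("multinode-378027", "busybox-7b57f96db7-8dfkh")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(ip) // 192.168.67.1 in the run above
}
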
TestMultiNode/serial/AddNode (29.82s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-378027 -v=5 --alsologtostderr
E1213 11:10:12.241498  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-378027 -v=5 --alsologtostderr: (29.090485218s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (29.82s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-378027 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.74s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.74s)

TestMultiNode/serial/CopyFile (10.92s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 cp testdata/cp-test.txt multinode-378027:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 ssh -n multinode-378027 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 cp multinode-378027:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile917245170/001/cp-test_multinode-378027.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 ssh -n multinode-378027 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 cp multinode-378027:/home/docker/cp-test.txt multinode-378027-m02:/home/docker/cp-test_multinode-378027_multinode-378027-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 ssh -n multinode-378027 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 ssh -n multinode-378027-m02 "sudo cat /home/docker/cp-test_multinode-378027_multinode-378027-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 cp multinode-378027:/home/docker/cp-test.txt multinode-378027-m03:/home/docker/cp-test_multinode-378027_multinode-378027-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 ssh -n multinode-378027 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 ssh -n multinode-378027-m03 "sudo cat /home/docker/cp-test_multinode-378027_multinode-378027-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 cp testdata/cp-test.txt multinode-378027-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 ssh -n multinode-378027-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 cp multinode-378027-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile917245170/001/cp-test_multinode-378027-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 ssh -n multinode-378027-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 cp multinode-378027-m02:/home/docker/cp-test.txt multinode-378027:/home/docker/cp-test_multinode-378027-m02_multinode-378027.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 ssh -n multinode-378027-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 ssh -n multinode-378027 "sudo cat /home/docker/cp-test_multinode-378027-m02_multinode-378027.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 cp multinode-378027-m02:/home/docker/cp-test.txt multinode-378027-m03:/home/docker/cp-test_multinode-378027-m02_multinode-378027-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 ssh -n multinode-378027-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 ssh -n multinode-378027-m03 "sudo cat /home/docker/cp-test_multinode-378027-m02_multinode-378027-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 cp testdata/cp-test.txt multinode-378027-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 ssh -n multinode-378027-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 cp multinode-378027-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile917245170/001/cp-test_multinode-378027-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 ssh -n multinode-378027-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 cp multinode-378027-m03:/home/docker/cp-test.txt multinode-378027:/home/docker/cp-test_multinode-378027-m03_multinode-378027.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 ssh -n multinode-378027-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 ssh -n multinode-378027 "sudo cat /home/docker/cp-test_multinode-378027-m03_multinode-378027.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 cp multinode-378027-m03:/home/docker/cp-test.txt multinode-378027-m02:/home/docker/cp-test_multinode-378027-m03_multinode-378027-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 ssh -n multinode-378027-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 ssh -n multinode-378027-m02 "sudo cat /home/docker/cp-test_multinode-378027-m03_multinode-378027-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.92s)
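
Note: the CopyFile test exercises minikube cp in all three directions (host to node, node to host, node to node) and verifies every copy with ssh ... "sudo cat". The sketch below replays one host-to-node and one node-to-node round trip under the same profile name; run is a hypothetical helper and cp-test_copy.txt an illustrative target name.

package main

import (
	"fmt"
	"os/exec"
)

// run executes one minikube CLI invocation and fails loudly, mirroring the
// helper-driven pattern of helpers_test.go above.
func run(args ...string) {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
}

func main() {
	p := "multinode-378027"
	// Host -> node, then read it back over SSH to verify the copy.
	run("-p", p, "cp", "testdata/cp-test.txt", p+":/home/docker/cp-test.txt")
	run("-p", p, "ssh", "-n", p, "sudo cat /home/docker/cp-test.txt")
	// Node -> node: copy the control plane's file onto the m02 worker.
	run("-p", p, "cp", p+":/home/docker/cp-test.txt",
		p+"-m02:/home/docker/cp-test_copy.txt")
	run("-p", p, "ssh", "-n", p+"-m02", "sudo cat /home/docker/cp-test_copy.txt")
}
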
TestMultiNode/serial/StopNode (2.42s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-378027 node stop m03: (1.323802712s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-378027 status: exit status 7 (550.196456ms)

-- stdout --
	multinode-378027
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-378027-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-378027-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-378027 status --alsologtostderr: exit status 7 (545.418941ms)

-- stdout --
	multinode-378027
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-378027-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-378027-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1213 11:10:49.815353  465333 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:10:49.815485  465333 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:10:49.815497  465333 out.go:374] Setting ErrFile to fd 2...
	I1213 11:10:49.815503  465333 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:10:49.815853  465333 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 11:10:49.816073  465333 out.go:368] Setting JSON to false
	I1213 11:10:49.816123  465333 mustload.go:66] Loading cluster: multinode-378027
	I1213 11:10:49.816212  465333 notify.go:221] Checking for updates...
	I1213 11:10:49.817507  465333 config.go:182] Loaded profile config "multinode-378027": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 11:10:49.817588  465333 status.go:174] checking status of multinode-378027 ...
	I1213 11:10:49.819105  465333 cli_runner.go:164] Run: docker container inspect multinode-378027 --format={{.State.Status}}
	I1213 11:10:49.839672  465333 status.go:371] multinode-378027 host status = "Running" (err=<nil>)
	I1213 11:10:49.839695  465333 host.go:66] Checking if "multinode-378027" exists ...
	I1213 11:10:49.840001  465333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-378027
	I1213 11:10:49.860351  465333 host.go:66] Checking if "multinode-378027" exists ...
	I1213 11:10:49.860714  465333 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:10:49.860761  465333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-378027
	I1213 11:10:49.877502  465333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33250 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/multinode-378027/id_rsa Username:docker}
	I1213 11:10:49.980184  465333 ssh_runner.go:195] Run: systemctl --version
	I1213 11:10:49.986760  465333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:10:49.999989  465333 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:10:50.071882  465333 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-13 11:10:50.061461136 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:10:50.072445  465333 kubeconfig.go:125] found "multinode-378027" server: "https://192.168.67.2:8443"
	I1213 11:10:50.072478  465333 api_server.go:166] Checking apiserver status ...
	I1213 11:10:50.072521  465333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:10:50.087098  465333 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1379/cgroup
	I1213 11:10:50.097416  465333 api_server.go:182] apiserver freezer: "12:freezer:/docker/9eef73c681bb457c53e0082bf174fbdb0baf21a7a9cda84553fd50046b0c0f2b/kubepods/burstable/pod0625647c86b216bbb39fdbc79270d544/70899bfd17ec188da3e62928202a2f86e972713bfdf5e5bae5cdb8635b1a0a47"
	I1213 11:10:50.097499  465333 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9eef73c681bb457c53e0082bf174fbdb0baf21a7a9cda84553fd50046b0c0f2b/kubepods/burstable/pod0625647c86b216bbb39fdbc79270d544/70899bfd17ec188da3e62928202a2f86e972713bfdf5e5bae5cdb8635b1a0a47/freezer.state
	I1213 11:10:50.106212  465333 api_server.go:204] freezer state: "THAWED"
	I1213 11:10:50.106241  465333 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1213 11:10:50.114539  465333 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1213 11:10:50.114573  465333 status.go:463] multinode-378027 apiserver status = Running (err=<nil>)
	I1213 11:10:50.114584  465333 status.go:176] multinode-378027 status: &{Name:multinode-378027 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 11:10:50.114601  465333 status.go:174] checking status of multinode-378027-m02 ...
	I1213 11:10:50.114964  465333 cli_runner.go:164] Run: docker container inspect multinode-378027-m02 --format={{.State.Status}}
	I1213 11:10:50.132277  465333 status.go:371] multinode-378027-m02 host status = "Running" (err=<nil>)
	I1213 11:10:50.132302  465333 host.go:66] Checking if "multinode-378027-m02" exists ...
	I1213 11:10:50.132602  465333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-378027-m02
	I1213 11:10:50.150062  465333 host.go:66] Checking if "multinode-378027-m02" exists ...
	I1213 11:10:50.150412  465333 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:10:50.150462  465333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-378027-m02
	I1213 11:10:50.168486  465333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33255 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/multinode-378027-m02/id_rsa Username:docker}
	I1213 11:10:50.272011  465333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:10:50.285102  465333 status.go:176] multinode-378027-m02 status: &{Name:multinode-378027-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1213 11:10:50.285136  465333 status.go:174] checking status of multinode-378027-m03 ...
	I1213 11:10:50.285618  465333 cli_runner.go:164] Run: docker container inspect multinode-378027-m03 --format={{.State.Status}}
	I1213 11:10:50.303978  465333 status.go:371] multinode-378027-m03 host status = "Stopped" (err=<nil>)
	I1213 11:10:50.304001  465333 status.go:384] host is not running, skipping remaining checks
	I1213 11:10:50.304009  465333 status.go:176] multinode-378027-m03 status: &{Name:multinode-378027-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.42s)
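
Note: minikube status reports node state through its exit code as well as its text. With m03 stopped, both status invocations above exit 7, while a fully running cluster exits 0 as the earlier status calls in this log do. The sketch below consumes the exit code instead of parsing the stdout table; the meaning attached to code 7 here is inferred from this run, not from a documented exit-code table.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// statusExitCode runs "minikube status" for a profile and returns the exit
// code: 0 when every node reports Running, non-zero (7 in the log above)
// when at least one node is down.
func statusExitCode(profile string) (int, error) {
	err := exec.Command("out/minikube-linux-arm64", "-p", profile, "status").Run()
	if err == nil {
		return 0, nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode(), nil
	}
	return 0, err // the binary could not be started at all
}

func main() {
	code, err := statusExitCode("multinode-378027")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("status exit code:", code)
}
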
TestMultiNode/serial/StartAfterStop (7.88s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-378027 node start m03 -v=5 --alsologtostderr: (7.063915863s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.88s)

TestMultiNode/serial/RestartKeepsNodes (81.98s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-378027
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-378027
E1213 11:11:10.430836  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-378027: (25.223899183s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-378027 --wait=true -v=5 --alsologtostderr
E1213 11:11:48.079850  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-378027 --wait=true -v=5 --alsologtostderr: (56.63338448s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-378027
--- PASS: TestMultiNode/serial/RestartKeepsNodes (81.98s)

TestMultiNode/serial/DeleteNode (5.7s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-378027 node delete m03: (4.994598341s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.70s)

TestMultiNode/serial/StopMultiNode (24.13s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-378027 stop: (23.949910754s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-378027 status: exit status 7 (87.144018ms)

-- stdout --
	multinode-378027
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-378027-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-378027 status --alsologtostderr: exit status 7 (92.499901ms)

-- stdout --
	multinode-378027
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-378027-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1213 11:12:49.951515  474173 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:12:49.951632  474173 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:12:49.951646  474173 out.go:374] Setting ErrFile to fd 2...
	I1213 11:12:49.951652  474173 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:12:49.951924  474173 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 11:12:49.952108  474173 out.go:368] Setting JSON to false
	I1213 11:12:49.952157  474173 mustload.go:66] Loading cluster: multinode-378027
	I1213 11:12:49.952233  474173 notify.go:221] Checking for updates...
	I1213 11:12:49.953244  474173 config.go:182] Loaded profile config "multinode-378027": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 11:12:49.953277  474173 status.go:174] checking status of multinode-378027 ...
	I1213 11:12:49.953794  474173 cli_runner.go:164] Run: docker container inspect multinode-378027 --format={{.State.Status}}
	I1213 11:12:49.973798  474173 status.go:371] multinode-378027 host status = "Stopped" (err=<nil>)
	I1213 11:12:49.973823  474173 status.go:384] host is not running, skipping remaining checks
	I1213 11:12:49.973830  474173 status.go:176] multinode-378027 status: &{Name:multinode-378027 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 11:12:49.973863  474173 status.go:174] checking status of multinode-378027-m02 ...
	I1213 11:12:49.974191  474173 cli_runner.go:164] Run: docker container inspect multinode-378027-m02 --format={{.State.Status}}
	I1213 11:12:49.995728  474173 status.go:371] multinode-378027-m02 host status = "Stopped" (err=<nil>)
	I1213 11:12:49.995756  474173 status.go:384] host is not running, skipping remaining checks
	I1213 11:12:49.995764  474173 status.go:176] multinode-378027-m02 status: &{Name:multinode-378027-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.13s)

TestMultiNode/serial/RestartMultiNode (48.39s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-378027 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-378027 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (47.679617746s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-378027 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.39s)

TestMultiNode/serial/ValidateNameConflict (40.68s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-378027
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-378027-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-378027-m02 --driver=docker  --container-runtime=containerd: exit status 14 (93.329095ms)

-- stdout --
	* [multinode-378027-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-378027-m02' is duplicated with machine name 'multinode-378027-m02' in profile 'multinode-378027'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-378027-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-378027-m03 --driver=docker  --container-runtime=containerd: (37.667225952s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-378027
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-378027: exit status 80 (337.011306ms)

-- stdout --
	* Adding node m03 to cluster multinode-378027 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-378027-m03 already exists in multinode-378027-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-378027-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-378027-m03: (2.528492331s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (40.68s)

TestPreload (118.59s)

=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-313184 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd
E1213 11:14:47.365177  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:15:12.241028  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-313184 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd: (58.035276803s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-313184 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-313184 image pull gcr.io/k8s-minikube/busybox: (2.212588295s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-313184
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-313184: (5.917125054s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-313184 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-313184 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (49.672259512s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-313184 image list
helpers_test.go:176: Cleaning up "test-preload-313184" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-313184
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-313184: (2.50937969s)
--- PASS: TestPreload (118.59s)

TestScheduledStopUnix (109.16s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-488347 --memory=3072 --driver=docker  --container-runtime=containerd
E1213 11:16:31.161087  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:16:48.079835  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-488347 --memory=3072 --driver=docker  --container-runtime=containerd: (31.788959817s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-488347 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1213 11:16:53.705089  490155 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:16:53.705273  490155 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:16:53.705283  490155 out.go:374] Setting ErrFile to fd 2...
	I1213 11:16:53.705289  490155 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:16:53.705532  490155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 11:16:53.705786  490155 out.go:368] Setting JSON to false
	I1213 11:16:53.705903  490155 mustload.go:66] Loading cluster: scheduled-stop-488347
	I1213 11:16:53.706312  490155 config.go:182] Loaded profile config "scheduled-stop-488347": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 11:16:53.706390  490155 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/scheduled-stop-488347/config.json ...
	I1213 11:16:53.706581  490155 mustload.go:66] Loading cluster: scheduled-stop-488347
	I1213 11:16:53.706746  490155 config.go:182] Loaded profile config "scheduled-stop-488347": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-488347 -n scheduled-stop-488347
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-488347 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1213 11:16:54.189409  490246 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:16:54.189528  490246 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:16:54.189537  490246 out.go:374] Setting ErrFile to fd 2...
	I1213 11:16:54.189542  490246 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:16:54.189799  490246 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 11:16:54.190043  490246 out.go:368] Setting JSON to false
	I1213 11:16:54.190238  490246 daemonize_unix.go:73] killing process 490173 as it is an old scheduled stop
	I1213 11:16:54.193935  490246 mustload.go:66] Loading cluster: scheduled-stop-488347
	I1213 11:16:54.194408  490246 config.go:182] Loaded profile config "scheduled-stop-488347": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 11:16:54.194492  490246 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/scheduled-stop-488347/config.json ...
	I1213 11:16:54.194678  490246 mustload.go:66] Loading cluster: scheduled-stop-488347
	I1213 11:16:54.195168  490246 config.go:182] Loaded profile config "scheduled-stop-488347": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1213 11:16:54.200776  308915 retry.go:31] will retry after 134.282µs: open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/scheduled-stop-488347/pid: no such file or directory
I1213 11:16:54.206774  308915 retry.go:31] will retry after 157.38µs: open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/scheduled-stop-488347/pid: no such file or directory
I1213 11:16:54.207858  308915 retry.go:31] will retry after 203.588µs: open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/scheduled-stop-488347/pid: no such file or directory
I1213 11:16:54.208977  308915 retry.go:31] will retry after 476.884µs: open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/scheduled-stop-488347/pid: no such file or directory
I1213 11:16:54.210088  308915 retry.go:31] will retry after 514.094µs: open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/scheduled-stop-488347/pid: no such file or directory
I1213 11:16:54.211177  308915 retry.go:31] will retry after 634.767µs: open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/scheduled-stop-488347/pid: no such file or directory
I1213 11:16:54.212307  308915 retry.go:31] will retry after 1.009879ms: open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/scheduled-stop-488347/pid: no such file or directory
I1213 11:16:54.213448  308915 retry.go:31] will retry after 926.429µs: open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/scheduled-stop-488347/pid: no such file or directory
I1213 11:16:54.214566  308915 retry.go:31] will retry after 2.555959ms: open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/scheduled-stop-488347/pid: no such file or directory
I1213 11:16:54.217705  308915 retry.go:31] will retry after 5.569385ms: open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/scheduled-stop-488347/pid: no such file or directory
I1213 11:16:54.223908  308915 retry.go:31] will retry after 6.952544ms: open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/scheduled-stop-488347/pid: no such file or directory
I1213 11:16:54.231157  308915 retry.go:31] will retry after 11.323416ms: open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/scheduled-stop-488347/pid: no such file or directory
I1213 11:16:54.243347  308915 retry.go:31] will retry after 14.231086ms: open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/scheduled-stop-488347/pid: no such file or directory
I1213 11:16:54.258593  308915 retry.go:31] will retry after 16.851204ms: open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/scheduled-stop-488347/pid: no such file or directory
I1213 11:16:54.275894  308915 retry.go:31] will retry after 21.212805ms: open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/scheduled-stop-488347/pid: no such file or directory
I1213 11:16:54.297318  308915 retry.go:31] will retry after 53.337667ms: open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/scheduled-stop-488347/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-488347 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-488347 -n scheduled-stop-488347
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-488347
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-488347 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1213 11:17:20.204108  490946 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:17:20.204222  490946 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:17:20.204233  490946 out.go:374] Setting ErrFile to fd 2...
	I1213 11:17:20.204239  490946 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:17:20.204492  490946 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 11:17:20.204738  490946 out.go:368] Setting JSON to false
	I1213 11:17:20.204830  490946 mustload.go:66] Loading cluster: scheduled-stop-488347
	I1213 11:17:20.205182  490946 config.go:182] Loaded profile config "scheduled-stop-488347": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 11:17:20.205260  490946 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/scheduled-stop-488347/config.json ...
	I1213 11:17:20.205440  490946 mustload.go:66] Loading cluster: scheduled-stop-488347
	I1213 11:17:20.205555  490946 config.go:182] Loaded profile config "scheduled-stop-488347": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-488347
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-488347: exit status 7 (72.676594ms)

-- stdout --
	scheduled-stop-488347
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-488347 -n scheduled-stop-488347
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-488347 -n scheduled-stop-488347: exit status 7 (72.848739ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-488347" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-488347
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-488347: (5.65588859s)
--- PASS: TestScheduledStopUnix (109.16s)
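The run above is the whole scheduled-stop contract: `--schedule 15s` returns immediately, `--cancel-scheduled` clears any pending stop, and a later `status` call reports the host Stopped with exit code 7 ("may be ok"). A minimal Go sketch of the schedule-then-poll step, assuming the binary path and profile name from this run are reused (both are otherwise hypothetical):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const bin = "out/minikube-linux-arm64"     // binary under test in this report
	const profile = "scheduled-stop-488347"    // hypothetical profile name
	// Schedule a stop 15s out, as scheduled_stop_test.go does.
	if err := exec.Command(bin, "stop", "-p", profile, "--schedule", "15s").Run(); err != nil {
		fmt.Println("schedule failed:", err)
		return
	}
	// `minikube status` exits 7 once the host is stopped, per the log above.
	for i := 0; i < 30; i++ {
		err := exec.Command(bin, "status", "--format", "{{.Host}}", "-p", profile).Run()
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 7 {
			fmt.Println("host stopped")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for scheduled stop")
}

Polling the exit code rather than parsing stdout mirrors the test's own "status error: exit status 7 (may be ok)" handling.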

TestInsufficientStorage (12.45s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-099070 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-099070 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (9.845696379s)
-- stdout --
	{"specversion":"1.0","id":"7460f40f-bcb2-4656-bcd4-c797f32563c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-099070] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bc9a5f26-586b-491b-85c4-775491a961b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22127"}}
	{"specversion":"1.0","id":"2631d455-a5d6-4d87-ac51-0a2601a0867a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4e6aef54-533a-4556-864e-3a5401a8c3cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig"}}
	{"specversion":"1.0","id":"8f5ac4cc-6b88-4f10-ba9c-4d922a751e37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube"}}
	{"specversion":"1.0","id":"b581c4ec-a138-48cf-a6ac-9d090acf14c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"609fb1a5-1326-4051-9e8b-f5d6b93cca6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"86001a7d-6a31-4ecf-8361-70bb0fde41a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"29b02334-04ff-485e-9aaa-1cd3cede5a1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"daef9268-c2b0-4d50-aed7-d40544edc6e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1c20dd1a-08f9-4edd-9186-c0bc478c1120","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"989b3c30-1833-40d2-b869-af3405309023","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-099070\" primary control-plane node in \"insufficient-storage-099070\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1c8a4c3a-b8d6-4a54-8409-08e2ae303f28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765275396-22083 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"a58703bd-bc9a-4f2c-9acc-94a7a4f36f7e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"a587a72b-e00f-414c-96d8-7018fc772f9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-099070 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-099070 --output=json --layout=cluster: exit status 7 (299.807449ms)
-- stdout --
	{"Name":"insufficient-storage-099070","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-099070","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1213 11:18:21.146910  492805 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-099070" does not appear in /home/jenkins/minikube-integration/22127-307042/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-099070 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-099070 --output=json --layout=cluster: exit status 7 (309.038662ms)
-- stdout --
	{"Name":"insufficient-storage-099070","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-099070","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1213 11:18:21.456800  492872 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-099070" does not appear in /home/jenkins/minikube-integration/22127-307042/kubeconfig
	E1213 11:18:21.467065  492872 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/insufficient-storage-099070/events.json: no such file or directory
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-099070" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-099070
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-099070: (1.997197386s)
--- PASS: TestInsufficientStorage (12.45s)
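With `--output=json`, every progress step and the final failure arrive as line-delimited CloudEvents, as captured above; the error event carries `name` (RSRC_DOCKER_STORAGE), `exitcode` (26), a `message`, and remediation `advice`. A minimal sketch that scans such a stream for error events (field names taken from the output above; everything else is a stand-in):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event matches the CloudEvents lines shown above; all data values are strings.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Pipe the output of `minikube start --output=json ...` into stdin.
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // error events can be long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}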

TestRunningBinaryUpgrade (310.49s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2984064734 start -p running-upgrade-430053 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2984064734 start -p running-upgrade-430053 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (30.789159545s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-430053 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1213 11:26:35.322856  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:26:48.080823  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:27:50.432922  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:29:47.365046  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:30:12.241285  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-430053 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m36.800180832s)
helpers_test.go:176: Cleaning up "running-upgrade-430053" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-430053
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-430053: (1.97248376s)
--- PASS: TestRunningBinaryUpgrade (310.49s)

TestMissingContainerUpgrade (134.32s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.1332824448 start -p missing-upgrade-904423 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.1332824448 start -p missing-upgrade-904423 --memory=3072 --driver=docker  --container-runtime=containerd: (59.627990635s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-904423
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-904423
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-904423 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-904423 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m7.428004335s)
helpers_test.go:176: Cleaning up "missing-upgrade-904423" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-904423
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-904423: (4.967433118s)
--- PASS: TestMissingContainerUpgrade (134.32s)
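What this test exercises: the node container is destroyed behind minikube's back (`docker stop` + `docker rm`), and the newer binary's `start` must notice the missing container and recreate it. A sketch of the same sequence (the versioned binary path in the real run carries a random temp suffix; paths and profile here are stand-ins):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and reports failures, mirroring the test's (dbg) Run steps.
func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		fmt.Printf("%s %v failed: %v\n%s", name, args, err, out)
	}
}

func main() {
	const profile = "missing-upgrade-904423" // hypothetical profile name
	// 1. Create the cluster with the previous release (temp path is illustrative).
	run("/tmp/minikube-v1.35.0", "start", "-p", profile, "--memory=3072",
		"--driver=docker", "--container-runtime=containerd")
	// 2. Remove the node container out from under minikube.
	run("docker", "stop", profile)
	run("docker", "rm", profile)
	// 3. The new binary must detect the missing container and recreate it.
	run("out/minikube-linux-arm64", "start", "-p", profile, "--memory=3072",
		"--driver=docker", "--container-runtime=containerd")
}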

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-965117 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-965117 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (89.878657ms)
-- stdout --
	* [NoKubernetes-965117] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
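The flag conflict is rejected before any cluster work begins: exit status 14 corresponds to the MK_USAGE error class shown in the stderr block. A sketch asserting that, reusing the binary path and profile from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --no-kubernetes and --kubernetes-version are mutually exclusive.
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "NoKubernetes-965117",
		"--no-kubernetes", "--kubernetes-version=v1.28.0",
		"--driver=docker", "--container-runtime=containerd")
	err := cmd.Run()
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 14 {
		fmt.Println("got expected MK_USAGE failure (exit 14)")
		return
	}
	fmt.Println("expected exit status 14, got:", err)
}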

TestNoKubernetes/serial/StartWithK8s (44.24s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-965117 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-965117 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (43.789388018s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-965117 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (44.24s)

TestNoKubernetes/serial/StartWithStopK8s (18.15s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-965117 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-965117 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (15.406552747s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-965117 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-965117 status -o json: exit status 2 (446.920063ms)
-- stdout --
	{"Name":"NoKubernetes-965117","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-965117
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-965117: (2.291521603s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.15s)

TestNoKubernetes/serial/Start (7.57s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-965117 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-965117 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (7.569503739s)
--- PASS: TestNoKubernetes/serial/Start (7.57s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-965117 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-965117 "sudo systemctl is-active --quiet service kubelet": exit status 1 (289.58004ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
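The probe relies on systemd semantics: `systemctl is-active --quiet` exits non-zero for a unit that is not active (status 3 here), and `minikube ssh` surfaces that as its own non-zero exit. A sketch of the same check, reusing the names from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// A non-zero exit means kubelet is not an active systemd unit in the node.
	err := exec.Command("out/minikube-linux-arm64", "ssh", "-p", "NoKubernetes-965117",
		"sudo systemctl is-active --quiet service kubelet").Run()
	if err != nil {
		fmt.Println("kubelet inactive, as expected:", err)
	} else {
		fmt.Println("unexpected: kubelet is active")
	}
}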

TestNoKubernetes/serial/ProfileList (0.7s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.70s)

TestNoKubernetes/serial/Stop (1.29s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-965117
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-965117: (1.294901378s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

TestNoKubernetes/serial/StartNoArgs (6.37s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-965117 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-965117 --driver=docker  --container-runtime=containerd: (6.369968961s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.37s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-965117 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-965117 "sudo systemctl is-active --quiet service kubelet": exit status 1 (291.462399ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

TestStoppedBinaryUpgrade/Setup (0.99s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.99s)

TestStoppedBinaryUpgrade/Upgrade (305.33s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.2928421873 start -p stopped-upgrade-250107 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.2928421873 start -p stopped-upgrade-250107 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (37.977034652s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.2928421873 -p stopped-upgrade-250107 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.2928421873 -p stopped-upgrade-250107 stop: (1.253042982s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-250107 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1213 11:21:48.079826  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:24:47.365121  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:25:12.240814  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-250107 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m26.097403318s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (305.33s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.24s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-250107
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-250107: (2.239358963s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.24s)

TestPause/serial/Start (56.04s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-495368 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E1213 11:31:48.079780  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-495368 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (56.036854108s)
--- PASS: TestPause/serial/Start (56.04s)

TestPause/serial/SecondStartNoReconfiguration (6.36s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-495368 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-495368 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.335972669s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.36s)

TestPause/serial/Pause (0.78s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-495368 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.78s)

TestPause/serial/VerifyStatus (0.32s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-495368 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-495368 --output=json --layout=cluster: exit status 2 (323.013093ms)
-- stdout --
	{"Name":"pause-495368","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-495368","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.32s)
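The `--layout=cluster` payload reuses HTTP-flavored status codes — 200 OK, 405 Stopped, 418 Paused, 500 Error, and 507 InsufficientStorage all appear in this report — so exit status 2 plus the JSON is enough to verify a paused cluster. A minimal decoder sketch (struct shapes inferred from the JSON above):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// component mirrors the per-component entries in the status JSON above.
type component struct {
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string               `json:"Name"`
		Components map[string]component `json:"Components"`
	} `json:"Nodes"`
}

func main() {
	// Pipe the output of `minikube status --output=json --layout=cluster` in.
	var st clusterStatus
	if err := json.NewDecoder(os.Stdin).Decode(&st); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%s: %d %s\n", st.Name, st.StatusCode, st.StatusName)
	for _, n := range st.Nodes {
		for name, c := range n.Components {
			fmt.Printf("  %s/%s: %d %s\n", n.Name, name, c.StatusCode, c.StatusName)
		}
	}
}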

TestPause/serial/Unpause (0.66s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-495368 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.66s)

TestPause/serial/PauseAgain (0.88s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-495368 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.88s)

TestPause/serial/DeletePaused (2.58s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-495368 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-495368 --alsologtostderr -v=5: (2.582756744s)
--- PASS: TestPause/serial/DeletePaused (2.58s)

TestPause/serial/VerifyDeletedResources (0.39s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-495368
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-495368: exit status 1 (18.112158ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-495368: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.39s)
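Deletion is verified negatively: the profile's Docker volume must be gone, so `docker volume inspect` failing with "no such volume" is the passing outcome. A sketch of that check, reusing the names from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// CombinedOutput captures the daemon's stderr message shown in the log above.
	out, err := exec.Command("docker", "volume", "inspect", "pause-495368").CombinedOutput()
	if err != nil && strings.Contains(string(out), "no such volume") {
		fmt.Println("volume removed, as expected")
		return
	}
	fmt.Printf("volume may still exist: err=%v out=%s", err, out)
}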

TestNetworkPlugins/group/false (3.81s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-270721 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-270721 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (297.176628ms)
-- stdout --
	* [false-270721] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1213 11:32:45.194353  551102 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:32:45.194948  551102 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:32:45.194966  551102 out.go:374] Setting ErrFile to fd 2...
	I1213 11:32:45.194973  551102 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:32:45.195519  551102 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
	I1213 11:32:45.196180  551102 out.go:368] Setting JSON to false
	I1213 11:32:45.197268  551102 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":15318,"bootTime":1765610247,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 11:32:45.198622  551102 start.go:143] virtualization:  
	I1213 11:32:45.202976  551102 out.go:179] * [false-270721] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:32:45.207165  551102 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:32:45.207388  551102 notify.go:221] Checking for updates...
	I1213 11:32:45.213885  551102 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:32:45.217351  551102 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
	I1213 11:32:45.221049  551102 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
	I1213 11:32:45.231891  551102 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:32:45.235974  551102 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:32:45.240856  551102 config.go:182] Loaded profile config "kubernetes-upgrade-415704": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 11:32:45.241035  551102 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:32:45.281450  551102 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:32:45.281660  551102 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:32:45.362342  551102 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:32:45.346034875 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:32:45.362460  551102 docker.go:319] overlay module found
	I1213 11:32:45.365612  551102 out.go:179] * Using the docker driver based on user configuration
	I1213 11:32:45.368408  551102 start.go:309] selected driver: docker
	I1213 11:32:45.368438  551102 start.go:927] validating driver "docker" against <nil>
	I1213 11:32:45.368454  551102 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:32:45.372123  551102 out.go:203] 
	W1213 11:32:45.375047  551102 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1213 11:32:45.377981  551102 out.go:203] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-270721 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-270721

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-270721

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-270721

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-270721

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-270721

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-270721

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-270721

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-270721

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-270721

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-270721

>>> host: /etc/nsswitch.conf:
* Profile "false-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270721"

>>> host: /etc/hosts:
* Profile "false-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270721"

>>> host: /etc/resolv.conf:
* Profile "false-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270721"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-270721

>>> host: crictl pods:
* Profile "false-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270721"

>>> host: crictl containers:
* Profile "false-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270721"

>>> k8s: describe netcat deployment:
error: context "false-270721" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-270721" does not exist

>>> k8s: netcat logs:
error: context "false-270721" does not exist

>>> k8s: describe coredns deployment:
error: context "false-270721" does not exist

>>> k8s: describe coredns pods:
error: context "false-270721" does not exist

>>> k8s: coredns logs:
error: context "false-270721" does not exist

>>> k8s: describe api server pod(s):
error: context "false-270721" does not exist

>>> k8s: api server logs:
error: context "false-270721" does not exist

>>> host: /etc/cni:
* Profile "false-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270721"

>>> host: ip a s:
* Profile "false-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270721"

>>> host: ip r s:
* Profile "false-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270721"

>>> host: iptables-save:
* Profile "false-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270721"

>>> host: iptables table nat:
* Profile "false-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270721"

>>> k8s: describe kube-proxy daemon set:
error: context "false-270721" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-270721" does not exist

>>> k8s: kube-proxy logs:
error: context "false-270721" does not exist

>>> host: kubelet daemon status:
* Profile "false-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270721"

>>> host: kubelet daemon config:
* Profile "false-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270721"

>>> k8s: kubelet logs:
* Profile "false-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270721"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270721"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270721"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 11:20:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-415704
contexts:
- context:
    cluster: kubernetes-upgrade-415704
    user: kubernetes-upgrade-415704
  name: kubernetes-upgrade-415704
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-415704
  user:
    client-certificate: /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/kubernetes-upgrade-415704/client.crt
    client-key: /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/kubernetes-upgrade-415704/client.key
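Note the empty current-context above: the only profile on the machine at this point is the unrelated kubernetes-upgrade-415704, which is why every kubectl probe in this dump fails with "context was not found for specified context: false-270721" — the false-270721 cluster was rejected at start and never created.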

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-270721

>>> host: docker daemon status:
* Profile "false-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270721"

>>> host: docker daemon config:
* Profile "false-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270721"

>>> host: /etc/docker/daemon.json:
* Profile "false-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270721"

>>> host: docker system info:
* Profile "false-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270721"

>>> host: cri-docker daemon status:
* Profile "false-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270721"

>>> host: cri-docker daemon config:
* Profile "false-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270721"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270721"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270721"

>>> host: cri-dockerd version:
* Profile "false-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270721"

>>> host: containerd daemon status:
* Profile "false-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270721"

>>> host: containerd daemon config:
* Profile "false-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270721"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270721"

>>> host: /etc/containerd/config.toml:
* Profile "false-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270721"

>>> host: containerd config dump:
* Profile "false-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270721"

>>> host: crio daemon status:
* Profile "false-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270721"

>>> host: crio daemon config:
* Profile "false-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270721"

>>> host: /etc/crio:
* Profile "false-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270721"

>>> host: crio config:
* Profile "false-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270721"

----------------------- debugLogs end: false-270721 [took: 3.361774209s] --------------------------------
helpers_test.go:176: Cleaning up "false-270721" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p false-270721
--- PASS: TestNetworkPlugins/group/false (3.81s)

TestStartStop/group/old-k8s-version/serial/FirstStart (58.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-624185 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1213 11:34:47.365625  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-624185 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (58.436315767s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (58.44s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-624185 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [12243156-b16b-4f05-9e5a-3a481734e37b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [12243156-b16b-4f05-9e5a-3a481734e37b] Running
E1213 11:35:12.240544  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004277263s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-624185 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.39s)
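Note: the DeployApp steps above are identical in every group: create the busybox pod from testdata/busybox.yaml, wait (up to 8m0s) for the pod matching integration-test=busybox to become healthy, then exec `ulimit -n` inside it to prove the container accepts commands. A minimal Go sketch of that flow; the suite's own polling helper (helpers_test.go:353) is replaced here by kubectl's `wait` subcommand, and the manifest contents are not shown in this log:

package main

import (
	"fmt"
	"os/exec"
)

// run shells out and echoes combined output, mirroring the (dbg) Run lines.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	ctx := "old-k8s-version-624185"
	// Create the test pod from the manifest used by the suite.
	if err := run("kubectl", "--context", ctx, "create", "-f", "testdata/busybox.yaml"); err != nil {
		panic(err)
	}
	// Wait for the labelled pod to become Ready (the suite polls instead).
	if err := run("kubectl", "--context", ctx, "wait", "pod",
		"-l", "integration-test=busybox", "--for=condition=Ready", "--timeout=8m0s"); err != nil {
		panic(err)
	}
	// Exec into the pod, as the test does with `ulimit -n`.
	if err := run("kubectl", "--context", ctx, "exec", "busybox", "--",
		"/bin/sh", "-c", "ulimit -n"); err != nil {
		panic(err)
	}
}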

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-624185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-624185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.101058975s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-624185 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.22s)

TestStartStop/group/old-k8s-version/serial/Stop (12.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-624185 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-624185 --alsologtostderr -v=3: (12.165104134s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.17s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-624185 -n old-k8s-version-624185
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-624185 -n old-k8s-version-624185: exit status 7 (77.327425ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-624185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
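Note: the `status error: exit status 7 (may be ok)` lines are expected here. minikube's status command reports component health through its exit code (per its help text: 1 for the host, 2 for the cluster, 4 for Kubernetes), so 7 simply confirms everything is down after `minikube stop`, and the addon can still be enabled against the stopped profile. A small Go sketch of treating that code as success, using the same binary and profile as above:

package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-624185")
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("host: %s\n", out) // "Running"
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		// 1 (host) + 2 (cluster) + 4 (kubernetes) all set: fully stopped,
		// the expected state right after `minikube stop` -- "may be ok".
		fmt.Printf("host: %s (exit 7, stopped)\n", out)
	default:
		fmt.Fprintln(os.Stderr, "status failed:", err)
		os.Exit(1)
	}
}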

TestStartStop/group/old-k8s-version/serial/SecondStart (53.53s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-624185 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-624185 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (53.122349696s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-624185 -n old-k8s-version-624185
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (53.53s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-p98zm" [bf5e5fd2-eaea-41cf-91a3-a1db8722872a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003828393s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-p98zm" [bf5e5fd2-eaea-41cf-91a3-a1db8722872a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00318976s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-624185 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-624185 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
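Note: VerifyKubernetesImages lists every image in the container runtime and reports anything outside the stock minikube/Kubernetes set; here the kindnet CNI images and the busybox test image are the expected extras. The log does not show the JSON shape of `image list --format=json`, so the sketch below only illustrates the filtering step over plain image references, and the prefix list is hypothetical:

package main

import (
	"fmt"
	"strings"
)

// Illustrative allow-list; the real test derives its expected set elsewhere.
var minikubePrefixes = []string{
	"registry.k8s.io/",
	"gcr.io/k8s-minikube/storage-provisioner",
}

func isMinikubeImage(ref string) bool {
	for _, p := range minikubePrefixes {
		if strings.HasPrefix(ref, p) {
			return true
		}
	}
	return false
}

func main() {
	// Stand-in for the parsed `image list --format=json` output.
	images := []string{
		"registry.k8s.io/kube-apiserver:v1.28.0",
		"kindest/kindnetd:v20230511-dc714da8",
		"gcr.io/k8s-minikube/busybox:1.28.4-glibc",
	}
	for _, ref := range images {
		if !isMinikubeImage(ref) {
			fmt.Println("Found non-minikube image:", ref)
		}
	}
}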

TestStartStop/group/old-k8s-version/serial/Pause (3.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-624185 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-624185 -n old-k8s-version-624185
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-624185 -n old-k8s-version-624185: exit status 2 (374.871164ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-624185 -n old-k8s-version-624185
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-624185 -n old-k8s-version-624185: exit status 2 (362.021651ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-624185 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-624185 -n old-k8s-version-624185
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-624185 -n old-k8s-version-624185
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.27s)
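Note: the Pause check reads as follows: after `pause`, `status --format={{.APIServer}}` prints Paused and `--format={{.Kubelet}}` prints Stopped, each with exit status 2, which the test tolerates ("may be ok") before `unpause` restores both. Condensed into a Go sketch (profile name taken from the run above; error handling simplified):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// status ignores the non-zero exit (it only mirrors the paused state)
// and returns trimmed stdout, just as the test inspects it.
func status(format, profile string) string {
	out, _ := exec.Command("out/minikube-linux-arm64", "status",
		"--format="+format, "-p", profile).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	p := "old-k8s-version-624185"
	if err := exec.Command("out/minikube-linux-arm64", "pause", "-p", p).Run(); err != nil {
		panic(err)
	}
	fmt.Println("apiserver:", status("{{.APIServer}}", p)) // expect "Paused"
	fmt.Println("kubelet:", status("{{.Kubelet}}", p))     // expect "Stopped"
	if err := exec.Command("out/minikube-linux-arm64", "unpause", "-p", p).Run(); err != nil {
		panic(err)
	}
}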

TestStartStop/group/embed-certs/serial/FirstStart (56.4s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-951675 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-951675 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (56.3984365s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (56.40s)

TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-951675 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [b3c71aef-b5d0-4a24-9a57-0610d741917d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [b3c71aef-b5d0-4a24-9a57-0610d741917d] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003598118s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-951675 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-951675 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-951675 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/embed-certs/serial/Stop (12.14s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-951675 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-951675 --alsologtostderr -v=3: (12.136324326s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.14s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-951675 -n embed-certs-951675
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-951675 -n embed-certs-951675: exit status 7 (79.204845ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-951675 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (50.23s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-951675 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-951675 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (49.838394001s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-951675 -n embed-certs-951675
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (50.23s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-xvnrw" [3d3a4185-3ae9-498f-98cd-9bd44752b0ef] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003037073s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-xvnrw" [3d3a4185-3ae9-498f-98cd-9bd44752b0ef] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004299274s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-951675 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-951675 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/embed-certs/serial/Pause (3.05s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-951675 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-951675 -n embed-certs-951675
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-951675 -n embed-certs-951675: exit status 2 (328.464405ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-951675 -n embed-certs-951675
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-951675 -n embed-certs-951675: exit status 2 (346.849347ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-951675 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-951675 -n embed-certs-951675
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-951675 -n embed-certs-951675
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.05s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-191845 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
E1213 11:39:47.366815  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:40:08.208972  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/old-k8s-version-624185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:40:08.215387  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/old-k8s-version-624185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:40:08.226858  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/old-k8s-version-624185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:40:08.248260  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/old-k8s-version-624185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:40:08.289812  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/old-k8s-version-624185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:40:08.371735  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/old-k8s-version-624185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:40:08.533320  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/old-k8s-version-624185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:40:08.855141  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/old-k8s-version-624185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:40:09.496912  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/old-k8s-version-624185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:40:10.778605  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/old-k8s-version-624185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-191845 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (50.121846225s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.12s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-191845 create -f testdata/busybox.yaml
E1213 11:40:12.240984  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [610d0153-e857-4ae3-8ec4-989700db0e24] Pending
E1213 11:40:13.340187  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/old-k8s-version-624185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [610d0153-e857-4ae3-8ec4-989700db0e24] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [610d0153-e857-4ae3-8ec4-989700db0e24] Running
E1213 11:40:18.461455  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/old-k8s-version-624185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.008816405s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-191845 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.35s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-191845 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-191845 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.0115052s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-191845 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-191845 --alsologtostderr -v=3
E1213 11:40:28.703767  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/old-k8s-version-624185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-191845 --alsologtostderr -v=3: (12.092088846s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-191845 -n default-k8s-diff-port-191845
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-191845 -n default-k8s-diff-port-191845: exit status 7 (71.535032ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-191845 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-191845 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
E1213 11:40:49.185249  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/old-k8s-version-624185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-191845 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (47.822405466s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-191845 -n default-k8s-diff-port-191845
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.19s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-xc6gn" [9f3946c8-1ca4-4dc5-8b35-cf58426d5ae7] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003605216s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-xc6gn" [9f3946c8-1ca4-4dc5-8b35-cf58426d5ae7] Running
E1213 11:41:30.146623  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/old-k8s-version-624185/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003906035s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-191845 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-191845 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-191845 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-191845 -n default-k8s-diff-port-191845
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-191845 -n default-k8s-diff-port-191845: exit status 2 (328.936804ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-191845 -n default-k8s-diff-port-191845
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-191845 -n default-k8s-diff-port-191845: exit status 2 (326.447262ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-191845 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-191845 -n default-k8s-diff-port-191845
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-191845 -n default-k8s-diff-port-191845
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.04s)

TestStartStop/group/no-preload/serial/Stop (1.34s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-333352 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-333352 --alsologtostderr -v=3: (1.344054957s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.34s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-333352 -n no-preload-333352
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-333352 -n no-preload-333352: exit status 7 (85.777072ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-333352 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (1.31s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-796924 --alsologtostderr -v=3
E1213 11:51:48.079912  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-796924 --alsologtostderr -v=3: (1.308829776s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.31s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-796924 -n newest-cni-796924
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-796924 -n newest-cni-796924: exit status 7 (83.709561ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-796924 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-796924 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestNetworkPlugins/group/auto/Start (52.13s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-270721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-270721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (52.127033497s)
--- PASS: TestNetworkPlugins/group/auto/Start (52.13s)

TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-270721 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

TestNetworkPlugins/group/auto/NetCatPod (8.26s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-270721 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-mj8nl" [06055090-54a9-46bd-8bea-1cdf62eee293] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-mj8nl" [06055090-54a9-46bd-8bea-1cdf62eee293] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004021979s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.26s)

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-270721 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-270721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-270721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
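Note: DNS, Localhost and HairPin together cover the three basic connectivity paths from inside the netcat deployment: cluster DNS (`nslookup kubernetes.default`), the pod's own port (`nc -w 5 -z localhost 8080`), and the pod reaching itself through its Service name (`nc -w 5 -z netcat 8080`, the hairpin-NAT case). The nc probes amount to a timed TCP dial, roughly this Go sketch when run inside the pod's network namespace:

package main

import (
	"fmt"
	"net"
	"time"
)

// probe mirrors `nc -w 5 -z <host> 8080`: connect within 5s, send nothing.
func probe(host string) {
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, "8080"), 5*time.Second)
	if err != nil {
		fmt.Println(host, "unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println(host, "ok")
}

func main() {
	probe("localhost") // Localhost check: the pod can reach its own port
	probe("netcat")    // HairPin check: the pod reaches itself via its Service
}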

TestNetworkPlugins/group/flannel/Start (55.59s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-270721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-270721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (55.592907786s)
--- PASS: TestNetworkPlugins/group/flannel/Start (55.59s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-ggpmf" [e92d6279-b8f1-4cc3-8ac4-36fdaa12559a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00401737s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-270721 "pgrep -a kubelet"
I1213 12:00:38.576001  308915 config.go:182] Loaded profile config "flannel-270721": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

TestNetworkPlugins/group/flannel/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-270721 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-hjncn" [832a3a97-4c9d-4a97-9330-b8776ef18383] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-hjncn" [832a3a97-4c9d-4a97-9330-b8776ef18383] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003672642s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.27s)

TestNetworkPlugins/group/flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-270721 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-270721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-270721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

TestNetworkPlugins/group/calico/Start (58.28s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-270721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-270721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (58.277324291s)
--- PASS: TestNetworkPlugins/group/calico/Start (58.28s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-nhhx8" [962d4003-c8af-4d22-90ed-c717d0f14710] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-nhhx8" [962d4003-c8af-4d22-90ed-c717d0f14710] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004527761s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-270721 "pgrep -a kubelet"
I1213 12:02:16.210551  308915 config.go:182] Loaded profile config "calico-270721": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

TestNetworkPlugins/group/calico/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-270721 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-jh4bj" [1a4a8fb1-c58a-4c41-bf41-28f59af5789f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-jh4bj" [1a4a8fb1-c58a-4c41-bf41-28f59af5789f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004115496s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.26s)
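
NetCatPod force-replaces the suite's netcat deployment and waits for the app=netcat pod to go Ready. The real manifest is testdata/netcat-deployment.yaml; the stand-in below only assumes its shape: the app=netcat label and the dnsutils container name come from the log above, while the image and command are hypothetical, and the real file presumably also defines the netcat Service used by the HairPin check:
$ kubectl --context calico-270721 apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: netcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: netcat              # label the test waits on
  template:
    metadata:
      labels:
        app: netcat
    spec:
      containers:
      - name: dnsutils         # container name seen in the Ready conditions above
        image: registry.k8s.io/e2e-test-images/agnhost:2.47   # hypothetical stand-in image
        args: ["netexec", "--http-port=8080"]                  # hypothetical listener on 8080
        ports:
        - containerPort: 8080
EOF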

TestNetworkPlugins/group/calico/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-270721 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-270721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-270721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

TestNetworkPlugins/group/custom-flannel/Start (64.72s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-270721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-270721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m4.724502432s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (64.72s)
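
Unlike the runs above that pass a builtin plugin name, --cni here takes a path, so minikube applies the checked-in flannel manifest instead of a bundled CNI. The shape of the invocation:
# --cni accepts either a builtin name (calico, flannel, kindnet, bridge) or a path to a CNI manifest.
$ out/minikube-linux-arm64 start -p custom-flannel-270721 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=containerd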

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-270721 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-270721 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-lffl8" [a11614ef-1cf6-4432-be40-9321286f122f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-lffl8" [a11614ef-1cf6-4432-be40-9321286f122f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004325926s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.26s)

TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-270721 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-270721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-270721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

TestNetworkPlugins/group/kindnet/Start (52.07s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-270721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-270721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (52.07074815s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (52.07s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-qhxlc" [a4450621-0705-4122-b51c-faaf551242a4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004068524s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-270721 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.29s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-270721 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-8mnv9" [55cfceea-ee78-4d39-bc14-ed9f939209d2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-8mnv9" [55cfceea-ee78-4d39-bc14-ed9f939209d2] Running
E1213 12:05:29.186457  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/auto-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004222521s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.29s)

TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-270721 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-270721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-270721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/bridge/Start (72.54s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-270721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-270721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m12.542656855s)
--- PASS: TestNetworkPlugins/group/bridge/Start (72.54s)

TestNetworkPlugins/group/enable-default-cni/Start (76.67s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-270721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E1213 12:06:48.080362  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:06:51.108786  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/auto-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:06:54.196080  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/flannel-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-270721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m16.671744836s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (76.67s)
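
--enable-default-cni is the legacy spelling for minikube's built-in bridge CNI; current minikube documents it as equivalent to --cni=bridge (treat the exact mapping as version-dependent):
# Legacy form used by this test:
$ out/minikube-linux-arm64 start -p enable-default-cni-270721 --enable-default-cni=true --driver=docker --container-runtime=containerd
# Roughly equivalent modern form:
$ out/minikube-linux-arm64 start -p enable-default-cni-270721 --cni=bridge --driver=docker --container-runtime=containerd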

TestNetworkPlugins/group/bridge/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-270721 "pgrep -a kubelet"
I1213 12:07:08.738367  308915 config.go:182] Loaded profile config "bridge-270721": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.43s)

TestNetworkPlugins/group/bridge/NetCatPod (10.42s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-270721 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-dqv6f" [a492d6d8-d363-4d24-bd67-0a50793a5762] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1213 12:07:09.874391  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:07:09.880840  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:07:09.892196  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:07:09.913517  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:07:09.954868  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:07:10.036238  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:07:10.197578  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:07:10.519866  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:07:11.162108  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:07:12.443947  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-dqv6f" [a492d6d8-d363-4d24-bd67-0a50793a5762] Running
E1213 12:07:15.005467  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/calico-270721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.00377997s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.42s)

TestNetworkPlugins/group/bridge/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-270721 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.26s)

TestNetworkPlugins/group/bridge/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-270721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.21s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-270721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-270721 "pgrep -a kubelet"
I1213 12:08:04.246208  308915 config.go:182] Loaded profile config "enable-default-cni-270721": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-270721 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-xfpfz" [8e656a2b-16ad-4918-bf90-89f82a84d298] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-xfpfz" [8e656a2b-16ad-4918-bf90-89f82a84d298] Running
E1213 12:08:08.121528  308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/no-preload-333352/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003520233s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.29s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-270721 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-270721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-270721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

Test skip (38/417)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0.45
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0
379 TestStartStop/group/disable-driver-mounts 0.16
392 TestNetworkPlugins/group/kubenet 3.63
400 TestNetworkPlugins/group/cilium 3.88

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

TestDownloadOnly/v1.34.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

TestDownloadOnly/v1.34.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

TestDownloadOnlyKic (0.45s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-336812 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-336812" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-336812
--- SKIP: TestDownloadOnlyKic (0.45s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-823668" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-823668
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (3.63s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-270721 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-270721

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-270721

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-270721

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-270721

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-270721

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-270721

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-270721

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-270721

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-270721

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-270721

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270721"

>>> host: /etc/hosts:
* Profile "kubenet-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270721"

>>> host: /etc/resolv.conf:
* Profile "kubenet-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270721"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-270721

>>> host: crictl pods:
* Profile "kubenet-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270721"

>>> host: crictl containers:
* Profile "kubenet-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270721"

>>> k8s: describe netcat deployment:
error: context "kubenet-270721" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-270721" does not exist

>>> k8s: netcat logs:
error: context "kubenet-270721" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-270721" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-270721" does not exist

>>> k8s: coredns logs:
error: context "kubenet-270721" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-270721" does not exist

>>> k8s: api server logs:
error: context "kubenet-270721" does not exist

>>> host: /etc/cni:
* Profile "kubenet-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270721"

>>> host: ip a s:
* Profile "kubenet-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270721"

>>> host: ip r s:
* Profile "kubenet-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270721"

>>> host: iptables-save:
* Profile "kubenet-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270721"

>>> host: iptables table nat:
* Profile "kubenet-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270721"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-270721" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-270721" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-270721" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270721"

>>> host: kubelet daemon config:
* Profile "kubenet-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270721"

>>> k8s: kubelet logs:
* Profile "kubenet-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270721"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270721"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270721"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 11:20:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-415704
contexts:
- context:
    cluster: kubernetes-upgrade-415704
    user: kubernetes-upgrade-415704
  name: kubernetes-upgrade-415704
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-415704
  user:
    client-certificate: /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/kubernetes-upgrade-415704/client.crt
    client-key: /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/kubernetes-upgrade-415704/client.key
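
[editor's note] The kubectl probes above all fail for the same reason visible in this dump: the only named context is the leftover kubernetes-upgrade-415704 entry and current-context is empty, so there is no kubenet-270721 context to resolve. A minimal sketch of that lookup using client-go's clientcmd package (the kubeconfig path is hypothetical; this is illustrative, not part of the test harness):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical path; substitute the kubeconfig location in use.
	cfg, err := clientcmd.LoadFromFile("/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	// kubectl reports `context "..." does not exist` when the requested
	// name is absent from the Contexts map, as kubenet-270721 is here.
	if _, ok := cfg.Contexts["kubenet-270721"]; !ok {
		fmt.Println(`context "kubenet-270721" does not exist`)
	}
	// current-context is "" in the dump above, so a bare kubectl call
	// has no cluster selected either.
	fmt.Printf("current-context: %q\n", cfg.CurrentContext)
}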

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-270721

>>> host: docker daemon status:
* Profile "kubenet-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270721"

>>> host: docker daemon config:
* Profile "kubenet-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270721"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270721"

>>> host: docker system info:
* Profile "kubenet-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270721"

>>> host: cri-docker daemon status:
* Profile "kubenet-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270721"

>>> host: cri-docker daemon config:
* Profile "kubenet-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270721"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270721"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270721"

>>> host: cri-dockerd version:
* Profile "kubenet-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270721"

>>> host: containerd daemon status:
* Profile "kubenet-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270721"

>>> host: containerd daemon config:
* Profile "kubenet-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270721"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270721"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270721"

>>> host: containerd config dump:
* Profile "kubenet-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270721"

>>> host: crio daemon status:
* Profile "kubenet-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270721"

>>> host: crio daemon config:
* Profile "kubenet-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270721"

>>> host: /etc/crio:
* Profile "kubenet-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270721"

>>> host: crio config:
* Profile "kubenet-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270721"

----------------------- debugLogs end: kubenet-270721 [took: 3.413271133s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-270721" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-270721
--- SKIP: TestNetworkPlugins/group/kubenet (3.63s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-270721 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-270721

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-270721

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-270721

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-270721

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-270721

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-270721

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-270721

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-270721

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-270721

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-270721

>>> host: /etc/nsswitch.conf:
* Profile "cilium-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270721"

>>> host: /etc/hosts:
* Profile "cilium-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270721"

>>> host: /etc/resolv.conf:
* Profile "cilium-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270721"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-270721

>>> host: crictl pods:
* Profile "cilium-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270721"

>>> host: crictl containers:
* Profile "cilium-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270721"

>>> k8s: describe netcat deployment:
error: context "cilium-270721" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-270721" does not exist

>>> k8s: netcat logs:
error: context "cilium-270721" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-270721" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-270721" does not exist

>>> k8s: coredns logs:
error: context "cilium-270721" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-270721" does not exist

>>> k8s: api server logs:
error: context "cilium-270721" does not exist

>>> host: /etc/cni:
* Profile "cilium-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270721"

>>> host: ip a s:
* Profile "cilium-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270721"

>>> host: ip r s:
* Profile "cilium-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270721"

>>> host: iptables-save:
* Profile "cilium-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270721"

>>> host: iptables table nat:
* Profile "cilium-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270721"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-270721

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-270721

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-270721" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-270721" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-270721

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-270721

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-270721" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-270721" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-270721" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-270721" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-270721" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270721"

>>> host: kubelet daemon config:
* Profile "cilium-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270721"

>>> k8s: kubelet logs:
* Profile "cilium-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270721"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270721"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270721"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 11:20:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-415704
contexts:
- context:
    cluster: kubernetes-upgrade-415704
    user: kubernetes-upgrade-415704
  name: kubernetes-upgrade-415704
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-415704
  user:
    client-certificate: /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/kubernetes-upgrade-415704/client.crt
    client-key: /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/kubernetes-upgrade-415704/client.key
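
[editor's note] The cilium run dumps the identical kubeconfig: the only named context is kubernetes-upgrade-415704 and current-context is again empty. For illustration only, a sketch of selecting that one existing context and writing the config back via client-go's clientcmd; the path is hypothetical and this is not something the skipped test performs:

package main

import (
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	const path = "/tmp/kubeconfig" // hypothetical path
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Point current-context at the only context present in the dump.
	cfg.CurrentContext = "kubernetes-upgrade-415704"
	// WriteToFile takes the config by value and persists it to disk.
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		log.Fatal(err)
	}
}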

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-270721

>>> host: docker daemon status:
* Profile "cilium-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270721"

>>> host: docker daemon config:
* Profile "cilium-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270721"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270721"

>>> host: docker system info:
* Profile "cilium-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270721"

>>> host: cri-docker daemon status:
* Profile "cilium-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270721"

>>> host: cri-docker daemon config:
* Profile "cilium-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270721"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270721"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270721"

>>> host: cri-dockerd version:
* Profile "cilium-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270721"

>>> host: containerd daemon status:
* Profile "cilium-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270721"

>>> host: containerd daemon config:
* Profile "cilium-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270721"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270721"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270721"

>>> host: containerd config dump:
* Profile "cilium-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270721"

>>> host: crio daemon status:
* Profile "cilium-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270721"

>>> host: crio daemon config:
* Profile "cilium-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270721"

>>> host: /etc/crio:
* Profile "cilium-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270721"

>>> host: crio config:
* Profile "cilium-270721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270721"

----------------------- debugLogs end: cilium-270721 [took: 3.715125298s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-270721" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-270721
--- SKIP: TestNetworkPlugins/group/cilium (3.88s)

                                                
                                    